New York
Wednesday, November 19, 2025

The New Imperative of Responsible AI


Hallmarks of Responsible AI

Over the past couple of years, many companies have begun establishing and managing AI governance frameworks. When we meet with companies to evaluate their AI governance approach, we ask: What are your AI policies and governance structures? Who staffs them, and what are the key roles and responsibilities? How are they resourced? What frameworks do you use? What have these efforts looked like in practice, and are you assessing and reporting on them regularly?

Our company engagements often involve an active, multi-year approach that can yield tangible results, such as publishing inaugural AI principles, implementing or strengthening AI governance, mitigating algorithmic bias in products and services, and assessing the downstream human rights risks and lifecycle impacts of AI products. We have seen success working with companies to establish the basic building blocks that demonstrate their systems perform as intended and identify risks effectively.

Best practices for companies using AI include establishing formal internal governance structures with board-level oversight of material issues and aligning risk-management programs with established external frameworks such as the National Institute of Standards and Technology (NIST) AI Risk Management Framework. This framework offers a voluntary, structured approach to identifying, assessing and mitigating AI-related risks, emphasizing key characteristics of trustworthy AI. It is flexible enough to be applied by companies of all sizes and in any sector.

Companies should have policies, practices and controls that demonstrate their AI systems work as intended, including:

• Establishing clear governance structures throughout business strategies that rely on AI and disclosing a set of AI principles and commitments.

• Integrating safety, bias and privacy considerations into product life cycles.

• Providing clear insight into the lifecycle processes for product review.

• Creating, strengthening and updating policies and programs for AI applications that consider individual impacts and unintended consequences and provide methods for resolving or escalating material issues.

• Disclosing performance metrics, such as how many AI products were reviewed for heightened risks and what percentage were consequently modified or halted.

Transparency and accountability are also important to building trust in AI outcomes. Companies are making significant investments in AI, and investors should be able to understand and evaluate their efforts and progress. Companies should also demonstrate how their risk-management processes prepare their businesses to mitigate these risks and capitalize on opportunities.

A Business Imperative

Just as GenAI has become an essential part of business operations, the responsible use of AI has become an essential strategy and component of risk management. And as AI systems become more powerful and embedded in society, it is increasingly critical for investor stewardship, through company engagements and proxy voting, to assess company policies and how they align with best practices.

This is not a temporary trend or a passing concern. It represents a fundamental shift in evaluating corporate responsibility and long-term value creation. The companies and investors that lead this transition can help ensure that this transformative technology is developed in a sustainable way that is aligned with long-term value creation.

Real-World Impact: Case in Point

While artificial intelligence presents a spectrum of challenges, from the massive energy consumption of data centers to concerns about worker displacement, our stewardship approach focuses on the ethical considerations and downstream risks AI poses when business outcomes have unintended consequences. Our engagements on this topic often look for assurance, transparency and accountability among companies integrating AI features into their products.

In 2023, we began engaging Intuit (INTU) to better understand how its AI governance approach aligned with best practices. The financial software provider has integrated AI throughout its product offerings in tax and personal finance software, as well as its email marketing platform. Following our engagement, Intuit released public disclosure on its AI governance and risk-management practices in the first half of 2025.

Investor Perspective: Responsible AI Recommendations

This year, Parnassus led the development of a set of investor recommendations for responsible AI in partnership with other investors. The document is intended to be a helpful reference for companies considering how to approach an ethical technology governance structure.


