AI risk management puts ML code and data science in context

The rapid growth of AI has raised awareness that companies must manage the legal and ethical risks that AI presents, such as racially biased algorithms used for hiring, mortgage underwriting and law enforcement. It’s a software problem that calls for a software solution, but the market for AI risk management tools and services is nascent and highly fragmented.

Algorithmic auditing, a process for verifying that decision-making algorithms produce expected results without violating legal or ethical parameters, shows promise, but there is no standard for what audits should review and report on. Machine learning operations (MLOps) brings efficiency and discipline to software development and deployment, but it typically ignores governance, risk, and compliance (GRC) issues.

According to Navrina Singh, founder and CEO of Credo AI, what’s needed is software that connects an organization’s responsible AI efforts by translating developers’ results into language and analytics that GRC leaders can trust and understand.

Credo AI is a two-year-old startup that makes such software to standardize AI governance in an organization. In the podcast, Singh explained what it does, how it differs from MLOps tools, and what is being done to create standards for algorithmic auditing and other responsible AI methods.

Responsible AI Risk Management

Prior to launching Credo AI in 2020, Singh was director of product development at Microsoft, where she led a team focused on the user experience of a new SaaS service for enterprise chatbots. Previously, she held engineering and business development roles at Qualcomm, eventually leading its global innovation program. She is active in promoting responsible AI as a member of the US Government’s National AI Advisory Committee (NAIAC) and the Mozilla Foundation Board of Trustees.

One of the biggest challenges in AI risk management is how to make software products and MLOps reports from data scientists and other technicians understandable to non-technical users.

Navrina Singh

Emerging MLOps tools have an important role but do not handle the audit step, according to Singh. “What they do really well is examine the technical assets and technical metrics of machine learning systems and make those results available to data scientists and machine learning specialists so they can act on them,” she said. “Visibility into those results is not an audit.”

Credo AI attempts to bridge the gap by translating these technical “artifacts” into risk and compliance scores, which it then turns into dashboards and audit trails tailored to different stakeholders.

A “trusted governance” repository includes artifacts created by data science and machine learning teams, such as test results, data sources, models, and their output. The repository also includes non-technical information, such as who reviewed the systems, where the systems rank in the organization’s risk hierarchy, and relevant compliance policies.

“Our belief is that if you are able to create this comprehensive governance repository of all evidence, then for the various stakeholders, whether internal or external, they can be held to higher standards of accountability,” Singh said.
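Credo AI has not published the schema behind this repository; purely to illustrate the idea, a governance record of this kind might pair the ML team’s technical evidence with the non-technical review metadata described above. The Python sketch below is a hypothetical illustration only; the names and fields are assumptions, not Credo AI’s actual product or API.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class GovernanceRecord:
    """Hypothetical entry in a 'trusted governance' repository.

    Field names are illustrative, not Credo AI's actual schema.
    """
    use_case: str                  # e.g. "resume screening"
    model_version: str             # model artifact under review
    data_sources: List[str]        # lineage of training/evaluation data
    test_results: Dict[str, float] # technical metrics produced by ML teams
    reviewers: List[str]           # who reviewed the system
    risk_tier: str                 # rank in the organization's risk hierarchy
    policies: List[str] = field(default_factory=list)  # applicable compliance policies

def compliance_summary(record: GovernanceRecord,
                       thresholds: Dict[str, float]) -> Dict[str, bool]:
    """Translate raw technical metrics into pass/fail signals a GRC reviewer can read."""
    return {metric: record.test_results.get(metric, 0.0) >= limit
            for metric, limit in thresholds.items()}

# Example: one record for an AI-based resume parser.
record = GovernanceRecord(
    use_case="resume screening",
    model_version="parser-v2.3",
    data_sources=["applicant-tracking export, 2021-2022"],
    test_results={"accuracy": 0.91, "demographic_parity_ratio": 0.84},
    reviewers=["ML lead", "compliance officer"],
    risk_tier="high",
    policies=["automated employment decision tool bias audit"],
)
print(compliance_summary(record, {"accuracy": 0.85, "demographic_parity_ratio": 0.80}))
```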

Clients include a Fortune 50 financial services provider that uses AI in fraud detection, risk scoring and marketing applications. Credo AI has been working with the company for two and a half years to optimize governance and create an overview of its risk profile. Government agencies have used the tool to govern their use of AI in hiring, conversational chatbots for internal operations, and object detection apps to help soldiers in the field.

Credo AI screenshot
This screenshot shows Credo AI’s risk and regulatory compliance analysis of an AI-based CV parser.

New York state of mind

In January, a new law goes into effect in New York City that prohibits employers from using automated employment decision tools unless the tools have been audited annually for bias based on race, ethnicity and sex. The law also requires employers to post a summary of the bias audit on their websites.
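The article doesn’t spell out what such an audit must compute, and, as Singh notes below, neither does the law; still, a metric commonly reported in employment bias audits is the impact ratio: each group’s selection rate divided by the highest group’s selection rate. The Python sketch below is purely illustrative; the data, function name and the 0.8 “four-fifths” flag are assumptions drawn from general U.S. employment guidance, not requirements of the law.

```python
from collections import Counter
from typing import Dict, List, Tuple

def impact_ratios(outcomes: List[Tuple[str, bool]]) -> Dict[str, float]:
    """Selection rate of each group divided by the highest group's selection rate.

    `outcomes` is a list of (group, was_selected) pairs, e.g. a year of
    screening decisions made with the automated tool.
    """
    selected = Counter(group for group, ok in outcomes if ok)
    totals = Counter(group for group, _ in outcomes)
    rates = {group: selected[group] / totals[group] for group in totals}
    best = max(rates.values())  # assumes at least one candidate was selected
    return {group: rate / best for group, rate in rates.items()}

# Illustrative data only; a ratio below 0.8 is often treated as a red flag
# (the "four-fifths rule"), though the New York law itself does not define the audit.
decisions = ([("group_a", True)] * 40 + [("group_a", False)] * 60
             + [("group_b", True)] * 25 + [("group_b", False)] * 75)
for group, ratio in impact_ratios(decisions).items():
    print(f"{group}: impact ratio = {ratio:.2f}")
```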

Singh said much of Credo AI’s recent activity has come from companies struggling to comply with the New York regulation, which she called a good local law. “Unfortunately, it didn’t define what an audit is,” she said.

The lack of widely accepted standards or even best practices for algorithmic auditing is widely seen as the Achilles heel of AI governance. The need will only grow as more local and national responsible AI laws come into effect.

Singh said what many companies did over the past year was just a review of their AI processes, not a real audit. “I think we need to be very clear on the language, because audits really need to be held to standards,” she said.

According to Singh, some promising AI governance standards that bear on auditing are in the works. One is the risk management framework that the National Institute of Standards and Technology, which administers the NAIAC, is actively working on. The European Union’s AI Act also takes a risk-based approach, she said. “The U.K., on the other hand, has taken a very context- and application-centric approach.”

Why is it so difficult to set standards?

“It’s hard because it’s so contextual,” Singh said. “You can’t have a peanut butter approach to regulating AI. I would say it will be a few more years before we see harmonized regulations or the emergence of holistic standards. I really believe this will be industry specific, use case specific.”

To listen to the podcast, click on the link above.
