February 6, 2024

A Proactive Approach to AI Legal Frameworks is Critical for Success


The use and capabilities of artificial intelligence (“AI”) hold tremendous promise, but with such significant potential upside comes myriad risks. AI models can “hallucinate” (i.e., invent inaccurate facts) and “drift” (i.e., deviate from the model’s original design and intended purpose), which can lead to bias, inaccurate results and potentially harmful outcomes. A key to harnessing the potential value of AI while managing its risks is to develop and implement thoughtful processes and protocols to govern the company’s approach to AI.

Governments in 96 countries across six continents have issued draft legislation and regulatory frameworks aimed at making the use of AI safe. While many jurisdictions are still working through final rules and regulations, on December 9, 2023, European Parliament negotiators and the Council presidency agreed on the final version of what is claimed to be the world’s first comprehensive legal framework on AI: the European Union Artificial Intelligence Act (the “EU AI Act”), which, once it receives final approval from the EU member states, is expected to take effect in 2026.

The EU AI Act, like much of the proposed legislation across the globe, sorts AI systems into risk-based categories, with “high-risk” AI requiring more stringent monitoring and governance than “limited-risk” AI, and AI posing unacceptable risk being prohibited altogether. To be well-positioned to comply with this legislation and to manage AI-related risks more adequately, companies should review their current processes and protocols governing the use of AI.

How to Think About AI Governance: The Colors of a Traffic Light

Compliance with the various AI legal frameworks will require that companies risk-rank their AI use cases. Risk-ranking within the legal frameworks can be thought of as a traffic light: prohibitively risky use cases (red light), low- or minimal-risk use cases (green light) and high-risk use cases (yellow light). The following provides a high-level overview of an approach to an AI governance framework based on early drafts of proposed AI legislation around the world, which is described in more detail in the recently published book, “Trust: Responsible AI, Innovation, Privacy and Data Leadership,” written by the author of this article. Categorizing the company’s AI use cases into these three categories could serve as the foundation of a company’s AI governance policy.

Red Light (Prohibited AI): There are 17 specific cases, such as voting surveillance or continuous public monitoring, that are strictly off-limits due to their threat to democratic values and privacy. Governments and regulators are already drawing clear lines in the sand.

Green Light (Low Risk): Certain AI use cases have safely navigated the ethical landscape for years and pose minimal risk of bias or safety concerns. This category generally includes use cases such as chatbots and AI used for customer service, product recommendations, and video games.

Yellow Light (High Risk): Most AI use cases fall into this category. This includes applications ranging from HR and finance to manufacturing and surveillance. For AI use cases in this category, companies are well-served by using caution and implementing thoughtful AI governance protocols.
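
One way a company might operationalize this three-tier approach is to maintain a simple use-case risk register as part of its AI governance policy. The sketch below, in Python, is a minimal illustration only; the tier labels, example use cases and their assignments are hypothetical and are not classifications drawn from the EU AI Act or any other statute.

```python
from enum import Enum

class RiskTier(Enum):
    RED = "prohibited"      # unacceptable risk: do not build or deploy
    YELLOW = "high_risk"    # permitted only with stringent governance
    GREEN = "low_risk"      # minimal governance obligations

# Hypothetical register; in practice each entry would be assigned by the
# company's AI governance function after a documented risk assessment.
AI_USE_CASE_REGISTER = {
    "continuous_public_monitoring": RiskTier.RED,
    "resume_screening": RiskTier.YELLOW,
    "credit_scoring": RiskTier.YELLOW,
    "product_recommendations": RiskTier.GREEN,
    "customer_service_chatbot": RiskTier.GREEN,
}

def governance_requirements(use_case: str) -> str:
    """Map a registered use case to the controls its tier requires."""
    tier = AI_USE_CASE_REGISTER.get(use_case)
    if tier is RiskTier.RED:
        return "Prohibited: do not build or deploy."
    if tier is RiskTier.YELLOW:
        return ("Permitted with controls: high-integrity data, continual "
                "testing, logging, human oversight, and fail-safes.")
    if tier is RiskTier.GREEN:
        return "Permitted: baseline monitoring and transparency."
    return "Unregistered use case: route to the AI governance committee."
```

In practice, such a register would be reviewed and updated periodically, serving as the index for which controls apply to which AI system.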

Navigating the Yellow Light:

  • Use High-Integrity Data: Fueling the AI model with accurate, high-quality, relevant data with verifiable ownership is the bedrock of responsible AI.
  • Test the Model Continually: Pre- and post-deployment testing of AI models for bias and accuracy is crucial to ensure safety, appropriate privacy and confidentiality, and compliance. Do not wait for a crisis to test AI models. Recall that even the most sophisticated AI models can "drift"—catch it early.
  • Document Technical Aspects of Data: Ensure that there is real-time logging and metadata capture. This is critical so that, in the event of an issue with AI outputs, the technical record can be reviewed by humans to pinpoint the moment the AI began to drift and to identify which outputs were affected by the drift.
  • Include Human Oversight: AI governance cannot be self-executed by AI systems. Humans must be involved in the quality-control process to confirm that the model is operating as intended and that its output is consistent with the company’s expectations. In the event that the AI model drifts, a human will need to review the technical documentation, including the logging and metadata records, to diagnose the issue and get the model back on track.
  • Implement Fail-Safes: Define clear stopping points. There are instances in which an AI model continuously deviates, producing tainted output. If deviations cannot reliably be corrected on an ongoing basis, it may be in the company’s best interest to cease use of the particular AI model; a simplified illustration of this kind of drift check and fail-safe appears after this list.
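
To make the testing, logging and fail-safe points above more concrete, the following Python sketch compares a model’s current output distribution against a pre-deployment baseline and signals when use of the model should be suspended for human review. The choice of metric (population stability index), the 0.2 threshold and the function names are illustrative assumptions only, not requirements of the EU AI Act or any other framework.

```python
import logging
import numpy as np

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai_governance")

DRIFT_THRESHOLD = 0.2  # illustrative cut-off; a real value would be set per use case

def population_stability_index(baseline: np.ndarray, current: np.ndarray,
                               bins: int = 10) -> float:
    """Compare two score distributions; larger values indicate more drift."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    b_pct = np.histogram(baseline, bins=edges)[0] / len(baseline) + 1e-6
    c_pct = np.histogram(current, bins=edges)[0] / len(current) + 1e-6
    return float(np.sum((c_pct - b_pct) * np.log(c_pct / b_pct)))

def drift_check(baseline_scores: np.ndarray, current_scores: np.ndarray) -> bool:
    """Log the drift metric in real time; return True if the fail-safe should trip."""
    psi = population_stability_index(baseline_scores, current_scores)
    log.info("drift metric=%.4f threshold=%.2f", psi, DRIFT_THRESHOLD)
    if psi > DRIFT_THRESHOLD:
        log.warning("Drift threshold exceeded: suspend the model and escalate "
                    "to human review.")
        return True
    return False
```

In practice, the baseline would come from pre-deployment validation data, the check would run on a recurring schedule, and the logged metrics would be reviewed by a human whenever the fail-safe trips.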

With AI-related regulatory frameworks and draft legislation proposed across the globe, it is critical that companies conduct self-evaluations of their approach to AI controls. The “traffic light” risk-ranking approach outlined above provides a high-level overview of how companies can better prepare for and anticipate what will be required of them under forthcoming AI-related laws and regulations. Given the speed at which AI can evolve, for better and for worse, it behooves companies to take a proactive approach to AI processes and protocols.

