The EU AI Act Will Transform Practices for AI Governance In the U.S.
Reprinted with permission from the January 9, 2024 edition of Law.com © 2024 ALM Properties, Inc. All rights reserved. Further duplication without permission is prohibited.
On Dec. 8, 2023, American companies woke up to the New York Times headline “E.U. Agrees on Landmark Artificial Intelligence Rules.” The agreement on the AI Act solidifies one of the world’s first comprehensive attempts to bring governance to AI while unlocking innovation.
U.S. companies have asked, what exactly does this development mean for their businesses?
Simply put, the EU AI Act will transform AI governance moving forward.
U.S. companies using AI in products or services directed to EU residents will be subject to a sweeping set of new governance obligations. While those obligations will not go into effect until two years after the final text of the law is published, most likely in 2026, U.S. companies should familiarize themselves with the new requirements as soon as possible so they can build their AI tools and governance programs in accordance with the forthcoming expectations.
What Is the Purpose of the AI Act?
The most recent draft of the EU AI Act defines its purpose as codifying “trustworthy AI”:
“The purpose of this Regulation is to promote the uptake of human-centric and trustworthy artificial intelligence and to ensure a high level of protection of health, safety, fundamental rights, democracy, and the rule of law and the environment from the harmful effects of artificial intelligence systems in the Union while supporting innovation and improving the functioning of the internal market.”
The Act will apply to both providers (e.g., tech companies licensing AI models or companies creating their own AI models) and deployers (companies that use or license AI models).
What Will It Require?
There are several governance steps. This article sets out some of the most important ones.
Step One: Risk-Rank AI
The EU AI Act will require American companies using AI on EU residents to risk-rank their AI. The Act divides AI into different categories:
- Prohibited uses (e.g., continuous remote biometric monitoring in public places or social scoring, along with another 13 specific examples). The original list of prohibited practices was proposed to be made more stringent in the EU Parliament version of the Draft Artificial Intelligence Act.
- High-risk (e.g., financial, health, children, sensitive data, critical infrastructure, and more than 100 other use cases). The original list of high-risk AI was proposed to be made more stringent in the EU Parliament version of the Draft Artificial Intelligence Act (June 14, 2023). Note: The author maintains a list of high-risk AI use cases.
- Minimal uses; and
- Low-risk AI (e.g., chatbots on retailer websites).
(See, European Commission’s Version of the Draft Artificial Intelligence Act (April 21, 2021), Title II, “Prohibited Artificial Intelligence Practices,” Article 5 at pp. 43-45 (last accessed Jan. 2, 2024). See also, The EU Parliament Version of the Draft AI Act (adopted June 14, 2023) at pp. 126-128 (last accessed Jan. 2, 2024); European Commission’s Version of the Draft Artificial Intelligence Act (April 21, 2021), Section 5.2.2 at p. 12 (last accessed Jan. 2, 2024); European Commission, “Regulatory Framework Proposal on Artificial Intelligence” (last accessed Jan. 2, 2024).)
To risk-rank effectively, companies will need to know the specific examples implicated by the Act, and they should also review the laws referenced in the EU AI Act to compile a complete list of high-risk AI. My review has identified 138 specific examples of high-risk AI, and more examples will emerge. Therefore, companies need to build periodic review of high-risk examples and regulatory developments into their governance programs so they stay current on high-risk AI use cases.
U.S. companies using, developing, or distributing AI in the EU will need to revamp the activities of their AI governance teams. A good way to think of these rankings is like a traffic light at an intersection. Prohibited AI is the red light: the EU is taking the position that certain specific activities are prohibited because the possibility of harm is considered too great. The green light is low/minimal-risk AI: companies can proceed with relative comfort, and the only governance required is to be transparent with users when they are interacting with an AI.
The bulk of the governance obligations are reserved for high-risk AI, which is the yellow light: just as governments want drivers to proceed with caution on yellow, the EU has identified high-risk AI as an area where companies can proceed with their AI use cases but must exercise caution, in the form of governance. Using the use cases identified by the EU and a review of the related legislative schemes incorporated into the drafts of the EU AI Act, I have identified over 100 specific high-risk AI use cases (including those concerning children, financial services, health, and critical infrastructure, among many others). A simplified sketch of how a governance team might track these rankings appears below.
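To make the exercise concrete, here is a minimal sketch, in Python, of an internal risk-ranking inventory. The tier names loosely track the Act’s categories; the field names, example use cases, and Annex references are illustrative assumptions, not text from the Act.

```python
# Hypothetical risk-ranking register; tiers loosely track EU AI Act categories.
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"   # e.g., social scoring
    HIGH_RISK = "high_risk"     # e.g., credit scoring, health triage
    LIMITED = "limited"         # transparency duties (e.g., chatbots)
    MINIMAL = "minimal"

@dataclass
class AIUseCase:
    name: str
    description: str
    tier: RiskTier
    legal_basis: str  # illustrative citation that drove the ranking

inventory = [
    AIUseCase("resume-screening", "Ranks job applicants",
              RiskTier.HIGH_RISK, "Annex III (employment) - illustrative"),
    AIUseCase("store-chatbot", "Answers order questions",
              RiskTier.LIMITED, "Transparency obligations - illustrative"),
]

# Governance teams can filter for the use cases carrying the heaviest duties.
high_risk = [u.name for u in inventory if u.tier is RiskTier.HIGH_RISK]
print(high_risk)
```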
Step Two: Confirm High-Quality Data Use
The second step for U.S. companies will be to confirm that their high-risk AI use cases are training on “high-quality” data, meaning that accurate and relevant data is going into the company’s high-risk model. If a company is licensing a commercial large language model and building its own application on top of it for a specific high-risk use case, the licensee company will need to know what data it is using to train its model and whether it has the rights (including IP rights and, if personal data is involved, privacy rights) to use that data. A simple provenance register of the kind sketched below can help track those rights.
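The following is a hedged sketch of such a training-data provenance register that flags sources whose IP or privacy rights are unresolved. The field names, values, and the cleared_for_training check are assumptions chosen for illustration; the Act does not prescribe this format.

```python
# Hypothetical training-data provenance register for a high-risk application.
from dataclasses import dataclass

@dataclass
class DataSource:
    name: str
    contains_personal_data: bool
    ip_license: str     # e.g., "licensed", "public domain", "unknown"
    privacy_basis: str  # e.g., "consent", "contract", "n/a"

def cleared_for_training(src: DataSource) -> bool:
    """Flag sources whose IP or privacy rights are unresolved."""
    if src.ip_license == "unknown":
        return False
    if src.contains_personal_data and src.privacy_basis == "n/a":
        return False
    return True

sources = [
    DataSource("claims-history-2020", True, "licensed", "contract"),
    DataSource("scraped-forum-posts", True, "unknown", "n/a"),
]
blocked = [s.name for s in sources if not cleared_for_training(s)]
print(blocked)  # sources to resolve before training the high-risk model
```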
Step Three: Continuous Testing, Monitoring, Mitigation and Auditing
The EU AI Act calls for testing, monitoring, and auditing, both pre- and post-deployment of high-risk AI, in the following areas:
- Algorithmic impact, or fairness/bias avoidance;
- IP;
- Accuracy;
- Product safety;
- Privacy;
- Cybersecurity (tested separately from privacy); and
- Antitrust
(See, European Commission Version of the Draft Artificial Intelligence Act (April 21, 2021), Explanatory Memorandum, Section 1.2 at p. 4, and at p. 29, paragraph (43) (last accessed Jan. 2, 2024) (emphasis added). See also, The EU Parliament Version of the Draft AI Act (adopted June 14, 2023), Article 10(2)(f)-(fa), at p. 57 (last accessed Jan. 2, 2024). See also, Technical Companion to the White House AI Bill of Rights at p. 26 (last accessed Jan. 2, 2024).)
The reason underpinning this requirement is the issue of model drift. With generative AI, for example, data scientists have found that even when high-quality training data goes into a model that produces accurate outcomes at first, over time generative AI models commonly drift from their original expectations for accuracy, fairness, and the other areas listed above. Accordingly, data scientists recommend that companies licensing generative AI from large providers, and then building their own closed-loop enterprise-level applications on top of the large language model, insert code within their applications to set guardrails for testing in the seven high-risk areas identified above. The benefit for companies is that if the model drifts, the company is alerted in real time so it can return the model to acceptable specifications. A minimal illustration of such a guardrail follows.
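The sketch below assumes the application already computes per-batch metrics such as accuracy and a fairness gap; the metric names, thresholds, and alert hook are placeholders chosen for illustration, not anything prescribed by the Act.

```python
# Hypothetical drift guardrail: compare live metrics against approved thresholds
# and raise a real-time alert when the model leaves its approved specifications.
from typing import Callable

THRESHOLDS = {
    "accuracy": 0.90,      # minimum acceptable accuracy (illustrative)
    "fairness_gap": 0.05,  # maximum acceptable gap between groups (illustrative)
}

def check_drift(metrics: dict[str, float],
                alert: Callable[[str], None]) -> bool:
    """Return True if the model is within spec; alert in real time if not."""
    within_spec = True
    if metrics.get("accuracy", 1.0) < THRESHOLDS["accuracy"]:
        alert(f"accuracy drifted to {metrics['accuracy']:.2f}")
        within_spec = False
    if metrics.get("fairness_gap", 0.0) > THRESHOLDS["fairness_gap"]:
        alert(f"fairness gap drifted to {metrics['fairness_gap']:.2f}")
        within_spec = False
    return within_spec

# Example: wire the check into the application's monitoring loop.
check_drift({"accuracy": 0.87, "fairness_gap": 0.02}, alert=print)
```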
As the capacity to test involves adding code to the application, this is an area where U.S. companies would do well to act now, rather than build models without the capacity to test and either: a) experience model drift that harms the brand (as we have seen in recent headlines where companies were accused of producing AI models that prescribed the wrong medication to people); or b) have to rebuild applications to embed testing two years from now, which will be far more costly than adding this known requirement while models are being built. Such an approach would also be consistent with recommendations made in the Technical Companion to the U.S. White House AI Bill of Rights.
While the requirement will not go into effect until 2026, companies should understand that a tech company’s general-purpose model may not itself be considered “high risk” because it is not designed for a specific high-risk area; the question for companies deploying it in a high-risk use case is whether the capacity for testing, monitoring, and auditing exists within the AI systems themselves.
Step Four: Risk Assessment
The EU AI Act calls for a risk assessment based on pre-deployment testing, auditing and monitoring. The risk assessment should:
“consist of a continuous iterative process and comprise the following steps: [footnote omitted]
- identification and analysis of the known and foreseeable risks associated with each high-risk AI system;
- estimation and evaluation of the risks that may emerge when the high-risk AI system is used in accordance with its intended purpose and under conditions of reasonably foreseeable misuse;
- evaluation of other possibly arising risks based on the analysis of data gathered from the post-market monitoring system referred to in Article 61;
- adoption of suitable risk management measures in accordance with the provisions of the following paragraphs.”
(See, European Commission Version of the Draft Artificial Intelligence Act (April 21, 2021), Article 9 at p. 46 (last accessed Jan. 2, 2024). See also, Technical Companion to the White House AI Bill of Rights at p. 18 (last accessed Jan. 2, 2024).)
This risk assessment needs to be reflected in both the logging and the metadata of the AI system itself. In addition, all mitigation efforts need to be logged there as well.
AI governance teams would do well to consider these steps.
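For illustration, here is a hedged sketch of how risk findings and mitigation measures might be written into a system’s own logging and metadata; the record schema, field names, and the "risk_register.jsonl" file are assumptions, not taken from the Act.

```python
# Hypothetical risk register: each mitigation is logged and persisted as metadata.
import datetime
import json
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-risk-register")

def record_mitigation(system_id: str, risk: str, measure: str) -> dict:
    entry = {
        "system_id": system_id,
        "risk_identified": risk,
        "mitigation": measure,
        "timestamp": datetime.datetime.utcnow().isoformat() + "Z",
    }
    log.info(json.dumps(entry))                  # visible in the system's logs
    with open("risk_register.jsonl", "a") as f:  # persisted alongside metadata
        f.write(json.dumps(entry) + "\n")
    return entry

record_mitigation("credit-scoring-v2",
                  "disparate error rates across age groups",
                  "re-weighted training data and re-ran fairness tests")
```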
Step Five: Technical Documentation
The EU AI Act will also call for the evidence of the continuous testing, monitoring, and auditing to be contained in the logging and metadata of the AI system itself. (See, European Commission Version of the Draft Artificial Intelligence Act (April 21, 2021), p. 7 and p. 29, para. (43) (last accessed Jan. 2, 2024).)
U.S. companies will want to ensure that they are generating and maintaining all of the required technical documentation of the tests that have been run, the mitigation steps that have been taken, and the continuous monitoring process that is expected to be present in the AI system itself. Id.
The expense to generate technical documentation is relatively small if built into the AI tool from the beginning.
Therefore, this is an area where U.S. companies could take proactive steps to build these protections before the EU Act goes into effect, most likely in 2026.
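As a rough illustration of how inexpensive this can be when planned early, the sketch below compiles a human-readable technical document from stored test and mitigation records. It assumes records have been persisted as JSON lines, as in the hypothetical register sketched in Step Four, and the file names are placeholders.

```python
# Hypothetical generator that renders logged test/mitigation records
# into a technical documentation file.
import json
from pathlib import Path

def build_tech_doc(records_path: str, out_path: str) -> None:
    """Render JSON-lines risk records into a readable evidence document."""
    records = [json.loads(line)
               for line in Path(records_path).read_text().splitlines() if line]
    lines = ["Technical Documentation - Test and Mitigation Evidence", ""]
    for r in records:
        lines.append(f"- {r['timestamp']}: {r['system_id']} | "
                     f"risk: {r['risk_identified']} | mitigation: {r['mitigation']}")
    Path(out_path).write_text("\n".join(lines))

# Example usage with placeholder file names:
# build_tech_doc("risk_register.jsonl", "technical_documentation.txt")
```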
Step Six: Transparency
The licensors and licensees of high-risk AI will be expected to be transparent with end users regarding the capabilities and limitations of the AI. In addition, the systems will need to be explainable to a third-party auditor or regulator if necessary. Whatever an enterprise may learn from the required testing, monitoring, and auditing, along with the risk mitigation measures discussed in Step Four, is likely to be subject to transparency reporting to customers/users, including disclosure that they are interacting with AI and the relative degree of the AI’s abilities. (See, Annex A, EU Council’s Revisions to the Commission’s Draft AI Act, p. 44, Recital 47 (last accessed Jan. 2, 2024).)
The main purpose of transparency reporting is to explain how the model is supposed to work and, similar to a nutrition label, what the model is and is not good for.
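By way of illustration only, a transparency summary might resemble the “nutrition label” sketch below; the fields are assumptions modeled loosely on common model-card practice, not a format prescribed by the Act.

```python
# Hypothetical "nutrition label" style transparency summary for a high-risk tool.
model_card = {
    "name": "claims-triage-assistant",
    "intended_use": "Prioritize incoming insurance claims for human review",
    "not_intended_for": ["final claim denial without human review"],
    "known_limitations": ["lower accuracy on handwritten submissions"],
    "users_informed_of_ai": True,
}

def render_card(card: dict) -> str:
    """Format the label for inclusion in user-facing documentation."""
    return "\n".join(f"{key}: {value}" for key, value in card.items())

print(render_card(model_card))
```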
Step Seven: Human Oversight
In addition to the technical processes described above, the EU AI Act also calls for human intervention to correct deviations from expectations as close as possible to the time they occur. This human oversight can protect the brand in real-time and prevent things like product safety issues from festering for months before an annual audit occurs. (See, Id. at p. 44, Recital 48.)
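A minimal sketch of one way to wire in such human oversight appears below: low-confidence outputs are routed to a human reviewer rather than returned automatically. The confidence threshold and escalation hook are hypothetical.

```python
# Hypothetical human-oversight gate for a high-risk AI application.
def route_to_reviewer(prediction: str) -> str:
    # In practice this would open a ticket or queue item for a trained reviewer.
    return f"[PENDING HUMAN REVIEW] {prediction}"

def with_human_oversight(prediction: str, confidence: float,
                         min_confidence: float = 0.8) -> str:
    """Return the prediction only if confidence meets the approved threshold."""
    if confidence < min_confidence:
        return route_to_reviewer(prediction)
    return prediction

print(with_human_oversight("approve claim", confidence=0.62))
```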
Step Eight: Failsafe
If the AI cannot be restored to the approved parameters set in the testing phase, the trustworthy AI legal frameworks share a clear intention that a failsafe be in place to kill the AI use if remedial mitigation steps cannot be effectuated. (See, Id. at p. 45, Recital 50.)
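For illustration, a hedged sketch of such a failsafe “kill switch” follows; the class and its behavior are assumptions about how a team might implement the concept, not a mechanism specified by the Act.

```python
# Hypothetical failsafe: disable serving until the model is remediated.
class FailsafeSwitch:
    def __init__(self) -> None:
        self.enabled = True

    def trip(self, reason: str) -> None:
        """Disable the AI use, e.g., when drift cannot be corrected."""
        self.enabled = False
        print(f"AI use disabled: {reason}")  # in practice: page on-call, log event

    def guard(self, serve_fn, *args, **kwargs):
        """Only serve predictions while the failsafe has not been tripped."""
        if not self.enabled:
            raise RuntimeError("AI system is disabled pending remediation")
        return serve_fn(*args, **kwargs)

failsafe = FailsafeSwitch()
failsafe.trip("model could not be restored to approved parameters")
```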
These factors obviously need to be explored and discussed in greater detail and at a more granular level so that all of their ramifications can be examined. In the meantime, this summary of the proposed legal frameworks provides necessary, actionable information to empower board members and CEOs to make well-informed decisions about risk, opportunity, and avoidance.
The fines for violating the EU AI Act will be higher than under the GDPR, at up to 7% of gross revenue (global turnover). (See, The EU Parliament Committee Version of the Draft AI Act (May 16, 2023), Article 71 at pp. 74-75 (last accessed Jan. 2, 2024).)
Conclusion
U.S. companies need to be aware that the requirements of the EU AI Act are similar to trends we are seeing in draft legislation and trustworthy AI regulatory frameworks across 96 countries and six continents. Accordingly, steps taken now to conform proactively to the EU AI Act could minimize risk and optimize outcomes for U.S. companies developing, licensing, or using high-risk AI.
