September 29, 2023

What Boards Need to Know Regarding the Forthcoming Artificial Intelligence-Related Legal Frameworks and What They Can Do to Prepare

Currently, artificial intelligence (“AI”)-related legal frameworks are pending or proposed in 37 countries across six continents. Even within a single country, multiple governmental agencies are claiming AI as within their jurisdictional reach. In the United States, for example, the Consumer Financial Protection Bureau, Department of Justice, Equal Employment Opportunity Commission, Food and Drug Administration, Federal Trade Commission and the Securities and Exchange Commission have each issued guidance or otherwise indicated through enforcement activity that they view AI as falling within their existing regulatory and enforcement authority. In a world where AI, particularly generative AI, continues to weave its way into businesses, marketing and public relations efforts, internal administrative actions and hiring processes, among other things, it may be prudent for boards of directors to understand the key implications of the new laws and their possible effect on the company’s use of AI. While the first of the many new AI-related laws is not anticipated to take effect until 2025, now is the time for boards to begin gaining familiarity with the key requirements on the horizon and to consider a proactive approach to compliance.

The Purpose of the New AI Legal Frameworks and Potential Consequences of Non-Compliance

The new AI legal frameworks are designed to balance the interest in encouraging innovation with concerns about human rights and civil liberties, privacy rights, anti-discrimination interests, consumer safety and protection, intellectual property protection, information integrity, security and fair business practices. In the European Union, for example, if the proposed law takes effect in its current form, non-compliance could carry monetary penalties that in some cases exceed the General Data Protection Regulation’s highest fine tier of 4% of a company’s gross revenue: the European Parliament’s legislation proposes fines of up to 7% of a company’s gross revenue for non-compliance. Additional potential consequences of non-compliance include reputational damage and, in certain cases under the pending or proposed laws, personal liability.

Action Items for Companies to Consider

With so many AI-related laws pending or proposed around the world by a variety of regulatory bodies, it is unsurprising that these new rules are not precisely aligned. However, there are key overlapping themes among the majority of the pending or proposed laws. Generally, the new AI legal frameworks call for categorizing the risk level of each of the company’s AI use cases, with categories including: (a) prohibited use, (b) high risk and (c) minimal or low risk. Where “high risk” AI is present, the pending or proposed laws generally require that such AI systems undergo continuous testing, monitoring and auditing in areas including privacy, cybersecurity, intellectual property, antitrust, algorithmic bias, accuracy and consumer product/health/safety. There are over 70 “high risk” use case categories across the various AI-related frameworks, including use cases relating to pharmaceuticals, medical devices, manufacturing, personal finance, employment, health and critical infrastructure, among many others. These “high risk” categories are broadly defined (for example, simply stating “employment” without further specificity), so we expect that most companies using generative AI will have at least some instances of “high risk” use and, therefore, may need to comply with the required testing, monitoring and auditing, depending on which laws they are ultimately subject to, among other factors. Once the various new laws take effect, they will generally require companies to put into place, or update accordingly, their AI-governance policies and enterprise risk management (“ERM”) programs.
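As a rough illustration of the cataloguing exercise described above, the short Python sketch below shows one way a company might begin tagging an inventory of AI use cases with the risk tiers the pending frameworks generally contemplate; the specific use cases, tier labels and keyword mapping are hypothetical assumptions for illustration only, not terms drawn from any particular statute.

    # Illustrative sketch of an AI use-case inventory tagged with coarse risk tiers.
    # The tiers mirror the categories described above (prohibited / high risk /
    # minimal or low risk); the areas and mappings below are hypothetical.

    HIGH_RISK_AREAS = {"employment", "medical device", "critical infrastructure",
                       "personal finance", "pharmaceutical"}

    def risk_tier(use_case_area, prohibited=False):
        """Assign a coarse risk tier to a described AI use case."""
        if prohibited:
            return "prohibited"
        if use_case_area.lower() in HIGH_RISK_AREAS:
            return "high risk"  # would trigger ongoing testing, monitoring and auditing
        return "minimal or low risk"

    inventory = [
        ("resume screening", "employment"),
        ("marketing copy drafting", "marketing"),
        ("diagnostic triage assistant", "medical device"),
    ]

    for name, area in inventory:
        print(f"{name}: {risk_tier(area)}")

In practice, this classification would be performed by counsel and compliance teams against the definitions in whichever laws actually apply, but even a simple inventory of this kind can help surface where the heavier testing, monitoring and auditing obligations are likely to land.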

By way of example, many companies have been using, or gearing up to use, AI technology to help with their employee hiring processes, such as routine resume screening of applicants. Across various use cases in recent years, inherent bias in AI models used for resume screening has produced applicant pools in which women and minorities were disproportionately screened out, as compared to screening performed by humans. While AI screening can certainly increase efficiency, it may result in biased outputs, and the new AI laws seek to keep this risk in check.
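To make the bias-testing point more concrete, the following Python sketch shows one simple way a screening tool’s outputs might be monitored for disparate impact by comparing selection rates across applicant groups; the group labels and counts are hypothetical, and the 0.8 threshold reflects the familiar “four-fifths” rule of thumb rather than a requirement of any pending AI law.

    # Illustrative sketch: monitoring a resume-screening tool for disparate impact.
    # Group names and counts are hypothetical; the 0.8 threshold reflects the
    # well-known "four-fifths" rule of thumb, not a mandate of any pending AI law.

    def selection_rate(advanced, total):
        """Share of applicants in a group that the screening tool advanced."""
        return advanced / total if total else 0.0

    # Hypothetical screening outcomes per applicant group.
    outcomes = {
        "group_a": {"advanced": 120, "total": 400},
        "group_b": {"advanced": 45, "total": 300},
    }

    rates = {g: selection_rate(o["advanced"], o["total"]) for g, o in outcomes.items()}
    highest = max(rates.values())

    for group, rate in rates.items():
        impact_ratio = rate / highest if highest else 0.0
        flag = "REVIEW" if impact_ratio < 0.8 else "ok"
        print(f"{group}: selection rate {rate:.2%}, impact ratio {impact_ratio:.2f} [{flag}]")

A check of this kind is only one piece of the continuous testing, monitoring and auditing that the pending frameworks contemplate, but it illustrates the sort of quantitative evidence boards may wish to see from management.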

Which Legal Framework Will Go Into Effect First?

The European Parliament approved its version of the European Union Artificial Intelligence Act (the “EU AI Act”) on June 14, 2023, and the EU AI Act is likely to be the first of the various pending or proposed AI legal frameworks to go into effect. The final version of the EU AI Act is currently being negotiated among the relevant EU regulators, and it is anticipated that the final version will go into effect in 2025 at the earliest. For more details, see our Legal Update, “European Parliament Reaches Agreement on its Version of the Proposed EU Artificial Intelligence Act.”

What Can Directors Do to Prepare for the Forthcoming Effectiveness of Applicable AI Legal Frameworks?

Step 1: Stay Current on the Relevant Legal Landscape. More than 198 AI-related laws and draft pieces of legislation are pending or proposed in jurisdictions around the world, many of which implicate board accountability. While directors cannot know the details of each piece of legislation, boards of directors should consider asking for information about the key legal requirements that apply to the company and requesting briefings on best practices for directors with respect to compliance and oversight. The AI legal frameworks, much like the AI they seek to govern, are continually evolving. Given the risk of significant monetary fines, reputational damage and potential personal liability, it is prudent for directors to be well-positioned to ensure the company’s compliance and proper oversight of AI-related risks.

Step 2: Ask Management the Right Questions. In exercising their oversight role, boards of directors should consider asking thoughtful and strategic questions about the company’s use of AI to ensure that internal processes line up with both company strategy and legal requirements.

Boards can consider asking management: (1) How are we using AI? (2) How are we testing, monitoring and auditing for accuracy, fairness, elimination of bias and privacy, and, separately and distinctly, for cybersecurity, product safety, IP and antitrust considerations? (3) How can we review and approve governance policies for AI that include human review by management? (4) How are we identifying and mitigating any new cyber-related risks introduced by AI use cases? (5) Are we developing AI in accordance with anticipated legislative and regulatory expectations?

Step 3: Continue to Monitor “Mission Critical” Risks and Keep Records. Consistent with directors’ fiduciary duties with respect to overseeing the company’s “mission critical” risks, it may benefit directors to be educated about the company’s AI risk exposure, as well as potential business and financial impacts. To establish a solid oversight record, relevant topics would ideally be documented in board meeting agendas and the resulting discussions recorded in meeting minutes.

At the board level, policies can be used to ensure that underlying data and AI technology are evaluated like any other assets of the company. For more details on director duties and recommended precautions, see our Legal Update, “Generative Artificial Intelligence and Corporate Boards: Cautions and Considerations.”

Step 4: Evaluate the Need for AI-Related Substantive Trainings and/or Experts. With ever-evolving technology, use cases and opportunities, the world of AI is far from static. As AI becomes more critical to the business, it is important that boards be kept generally apprised of any key corresponding risks. While this should not require a detailed technological understanding of all things AI, a certain level of understanding may be helpful to fully appreciate the risks. If a board does not have members with AI experience, it may consider retaining third-party advisors to support the board, in addition to any relevant education that can be provided in-house, whether through management briefings or by elevating leaders proficient in AI-related issues to the board. In cases where AI is integral to the company’s business and no current board members are sufficiently versed in the field, boards may consider adding a director with the relevant substantive experience and skillset.

Step 5: Be Proactive with Compliance. It is prudent for top-level management and boards to understand the company’s AI use cases and associated risks, and to develop an appropriate governance framework that adequately addresses the relevant legal requirements. As AI use cases continue to advance rapidly, companies that establish effective AI governance early on will be better positioned to move quickly to integrate new use cases while also complying with the AI-related laws that come into effect.
