January 15, 2020

White House Proposes Binding AI Principles for Regulators

On January 7, 2020, the White House issued a draft memorandum setting forth proposed principles for agencies to follow when regulating and taking non-regulatory actions affecting artificial intelligence (AI) in the private sector.1 The principles provide a road map for agencies to achieve objectives first described in the Trump administration’s Executive Order on Maintaining American Leadership in Artificial Intelligence, signed on February 11, 2019.2 Importantly for companies, the memorandum includes a call for the private sector to provide comments.

In keeping with that executive order, the principles seek to ensure American AI preeminence by focusing on AI adoption, innovation and growth, while respecting American values such as IP protection, economic and national security, privacy and civil liberties. Chief Technology Officer of the United States Michael Kratsios stated that the proposed principles, which are designed to “[e]nsure public engagement, limit regulatory overreach and promote trustworthy technology,” will help to secure America’s status as a “global hub[] of AI innovation.”3 He contrasted the principles to “the rise of technology-enabled authoritarianism in China,” an “authoritarian government[] that ha[s] no qualms about supporting and enabling companies to deploy technology that undermines individual liberty and basic human rights . . . .”

The principles repeatedly emphasize that federal agencies must avoid regulatory and non-regulatory actions that needlessly hamper AI innovation and growth. They also appear to welcome federal preemption of inconsistent, duplicative or burdensome state laws that prevent the emergence of a national AI market.

The United States has yet to promulgate a broad regulatory scheme governing the development and use of AI, nor have US agencies collaborated to ensure consistent regulation of AI. Rather than outline a specific regulatory scheme, the principles provide guidance to align future regulatory action with the Trump administration’s goals, encourage interagency cooperation and avoid over-regulation. To help ensure that these objectives are achieved, the principles explicitly require that, within 180 days of issuance of the memorandum, federal departments and agencies inform the Office of Management and Budget (OMB) of how they plan to achieve consistency with it.

The Principles

The memorandum sets forth 10 principles that should guide AI regulation:  

1. Trustworthiness. AI adoption and acceptance require public trust. Agencies, therefore, should “promote reliable, robust and trustworthy AI applications” to foster that trust.

2. Public Participation. To further foster public trust, the principles strongly encourage agencies to invite public participation in the rulemaking process surrounding AI.

3. Scientific Integrity and Information Quality. Once again invoking the importance of public trust, the principles assert that agencies should employ high standards of quality, transparency and compliance with respect to information gathering for public policy or private sector decisions governing the use of AI. Agencies should be mindful that “for AI applications to produce predictable, reliable, and optimized outcomes, data used to train the AI system must be of sufficient quality for the intended use.”

4. Risk Assessment. Consistent with the desire to foster innovation, the principles caution against an unduly conservative approach to risk management: “It is not necessary to mitigate every foreseeable risk . . . all activities involve tradeoffs.”

5. Benefits and Costs. Again, the principles take a practical, innovation-friendly approach, advising agencies conducting a cost/benefit analysis not to compare proposed AI with a theoretically perfect reality but, rather, with “the systems AI has been designed to complement or replace” and “[the] degree of risk tolerated in other existing [systems].”

6. Flexibility. Regulations should not prescribe technical specifications. Instead, “agencies should pursue performance-based and flexible approaches that can adapt to rapid changes and updates to AI applications.”  

7. Fairness. The principles counsel that agencies should consider whether AI produces discriminatory outcomes “as compared to existing processes,” recognizing that AI has “the potential of reducing present-day discrimination caused by human subjectivity.”

8. Disclosure and Transparency. The principles note that transparency and disclosure (e.g., of the use of AI and how it affects users) may foster public trust and confidence. However, the principles instruct that the analysis is “context-specific” and, further, that agencies “consider the sufficiency of existing or evolving legal, policy, and regulatory environments before contemplating additional measure[s].”

9. Safety and Security. The principles encourage the consideration of “safety and security issues throughout the AI design, development, deployment, and operation process.” Agencies should pay attention to the confidentiality and integrity of information processed, stored and transmitted by AI systems, as well as cybersecurity risks.

10. Interagency Cooperation. To achieve the memorandum’s goals, and “to ensure consistency and predictability of AI-related policies,” interagency cooperation is essential.

Underlying these principles is a theme of pragmatic, light-touch regulation. The principles caution that agencies should not “hold[] AI systems to such an impossibly high standard that society cannot enjoy their benefits.” First and foremost, agencies should step back to encourage the growth of AI, entering the fray only when regulation is absolutely necessary, such as when public trust is compromised. The White House reiterates that message throughout the guidance.

Non-Regulatory Approaches to AI

The memorandum suggests several non-regulatory approaches for agencies to address risks posed by AI applications. These include sector-specific policy guidance or frameworks, pilot programs and experiments that provide safe harbors for specific AI applications (e.g., hackathons, tech sprints) and voluntary consensus standards developed by the private sector and other stakeholders.

The memorandum also recommends that federal departments and agencies take an active role in facilitating the use and acceptance of AI by increasing public access to federal data and models for AI research and development, communicating with the public about the benefits and risks of AI (e.g., publishing AI-related requests for information in the Federal Register), working with the private sector to develop voluntary consensus standards and cooperating with international regulatory authorities to promote consistent approaches to AI.

Agency Plans

Within 180 days of the issuance of this proposed guidance, agencies must report to OMB any regulatory actions they have planned or are considering and how they have taken the principles into account. Thus, the Trump administration is effectively requiring departments and agencies to reveal and re-evaluate anything coming down the pike that might affect the private sector’s use of AI. In addition to planned regulatory actions, agencies must also “identify any statutory authorities specifically governing agency regulation of AI applications,” any collections by the agency of AI-related information from the private sector and AI use cases within the agency’s purview. It is possible that agencies will request AI-related information from the private sector in the near future to inform their submissions to OMB.

Conclusion

This guidance to departments and agencies marks the first concrete step toward aligning AI-related regulatory and non-regulatory approaches with the president’s strategy on AI. Companies should keep these principles in mind when drafting their own ethical principles and governance on the use of AI, as we expect to see these themes reflected in any future regulation.


1 See Russell T. Vought, Memorandum for the Heads of Executive Departments and Agencies: Guidance for Regulation of Artificial Intelligence Applications (January 7, 2020), available at https://www.whitehouse.gov/wp-content/uploads/2020/01/Draft-OMB-Memo-on-Regulation-of-AI-1-7-19.pdf.

2 See Kendall Burman, Brad Peterson, Rajesh De, David Beam, Alex Lakatos and Howard Waltzman (Mayer Brown LLP), Takeaways from Trump Administration’s New AI Strategy (March 4, 2019), available at https://www.mayerbrown.com/-/media/files/news/2019/04/takeaways-from-trump-administrations-new-ai-strategy.pdf.

3 See Michael Kratsios, AI That Reflects American Values (January 8, 2020), available at https://www.bloomberg.com/opinion/articles/2020-01-07/ai-that-reflects-american-values.
