October 31, 2023

President Biden Issues Broad Executive Order on Artificial Intelligence


On October 30, 2023, President Joe Biden issued an Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (the “AI EO”). Directing numerous actions by federal agencies, the AI EO reflects the Biden Administration’s intent to employ a range of legal and policy tools to promote US leadership on artificial intelligence (“AI”) while reducing the associated risks.1

The AI EO directs the creation, over the next year, of best practices and regulations to promote safety, cybersecurity, privacy, fairness, and competition. Government action will also include studies on uses of AI across government agencies and industries, and measures to support development of the technology.

The AI EO provides these directions across eight operative sections: (i) ensuring the safety and security of AI technology, (ii) promoting innovation and competition, (iii) supporting workers, (iv) advancing equity and civil rights, (v) protecting consumers, patients, passengers, and students, (vi) protecting privacy, (vii) advancing federal use of AI, and (viii) strengthening American leadership abroad. Below, we highlight key elements of these sections that could have important implications for businesses, including the creation of:

  • Best practices for development and deployment of AI across industries;
  • Regulatory guidance or requirements for certain AI uses, such as in critical infrastructure, biomedical research, and hiring and employment; and
  • New government oversight of certain high-risk AI models and infrastructure service providers.

Ensuring the Safety and Security of AI Technology

Section 4 of the AI EO sets out government actions intended “to protect Americans from the potential risks of AI systems.”

Best Practices for Developing and Testing AI Technology: Section 4.1 requires that the National Institute of Standards and Technology (“NIST”) issue guidelines for “developing and deploying safe, secure, and trustworthy AI systems.” This includes secure development practices for generative AI2 and dual-use foundation models,3 as well as guidance for evaluating and auditing AI capabilities that could cause harm, such as through conducting “red-teaming tests.” Section 4.1 also directs the Department of Energy to develop a plan for testing AI models for risks related to nuclear, nonproliferation, biological, chemical, critical infrastructure, and energy security threats.

Requirements for Dual-Use Foundation Models and Infrastructure as a Service Providers: Invoking authority under the Defense Production Act, Section 4.2 of the AI EO directs the Department of Commerce to require ongoing reports from entities “developing or demonstrating an intent to develop potential dual-use foundation models” regarding (i) development activities, including security protections around these activities, (ii) ownership and possession of model weights and security around these model weights, and (iii) red-team safety tests.

In addition, the Department of Commerce must establish know-your-customer and related reporting requirements that apply when foreign persons transact with an Infrastructure as a Service Provider or its resellers. At a minimum, the AI EO dictates that these processes involve verifying the identity of the foreign person, maintaining records about the foreign person, and reporting certain transactions.

Risk Management and Regulations for Critical Infrastructure: Section 4.3 tasks agencies responsible for critical infrastructure with assessing relevant risks of AI. Additionally, these agencies will create safety and security guidance that adapts the NIST AI Risk Management Framework for their sectors, with the ultimate plan of implementing regulatory mandates. Specific to perceived biosecurity risks, Section 4.4 further requires the creation of a framework to manage these risks that will be incorporated into federal funding requirements.

Authenticating and Watermarking: Responding to concerns about the use of AI to enable fraud, Section 4.5 directs the Department of Commerce to develop guidance for authentication and watermarking to label AI-generated content. While the guidance will be directed to federal agencies, the AI EO aims to set expectations for the private sector as well, including by incorporating requirements into government contracts.

Promoting Innovation and Competition

In Section 5, the AI EO aims to support US leadership in AI innovation through funding research, supporting small developers and businesses, addressing intellectual property concerns, and recruiting talent to the United States.

Intellectual Property: Section 5.2 sets out plans to address intellectual property issues that are currently creating uncertainty around the use of AI. First, the AI EO directs the US Patent and Trademark Office to publish guidance on patent eligibility related to inventorship and the use of AI, including generative AI. Second, the Copyright Office will be involved in issuing recommendations on potential executive actions relating to copyright and AI, including the scope of protection for works produced using AI and the treatment of copyrighted works in AI training.

Attracting Talent to the US: Section 5.1 establishes programs and policies to recruit and develop AI talent in the United States. This includes a series of steps to streamline and promote visas for foreign individuals with AI-related skills.

Supporting Workers

In addition to considering labor-market impacts from AI, the Biden Administration warns against “the dangers of increased workplace surveillance, bias, and job displacement.” Section 6 of the AI EO sets out efforts intended to mitigate these risks. Specifically, the Department of Labor will publish best practices for employers that address job displacement; workplace equity, health, and safety; and employee data collection. The AI EO further directs the Department of Labor to create guidance “to make clear that employers that deploy AI to monitor or augment employees’ work must continue to comply with protections that ensure that workers are compensated for their hours worked.”

Advancing Equity and Civil Rights

Building on the Blueprint for an AI Bill of Rights and prior actions, Section 7 of the AI EO outlines actions focused on combating discriminatory or biased uses of AI. While primarily focused on criminal justice and the administration of government benefits, Section 7 also directs the Secretary of Labor to publish guidance for federal contractors on nondiscrimination in hiring when using AI hiring systems. It further calls on the Federal Housing Finance Agency, the Consumer Financial Protection Bureau, and the US Department of Housing and Urban Development to use their authorities to address discrimination and bias in AI tools used in the housing and consumer financial markets.

Protecting Consumers, Patients, Passengers, and Students

Section 8 of the AI EO focuses on assessing the effect of AI on the healthcare, transportation, education, and communications sectors. This includes directing relevant agencies to study and develop strategies for the use of AI as well as consider the creation of best practices or regulations. For instance, the Department of Health and Human Services will establish policies and frameworks on deployment of AI in the health sector, including in research, drug and device safety, and healthcare delivery.

Protecting Privacy

Section 9 of the AI EO focuses on steps to evaluate and improve privacy protections for information that the federal government collects and uses. This includes assessing the personal information that agencies collect and developing guidelines for agencies to use “privacy-enhancing technologies.” Although these guidelines will apply only to federal agencies, they may inform views on private sector activities. In addition, in the accompanying fact sheet, the Biden Administration repeated its call on Congress to pass data privacy legislation.

Advancing Federal Government Use of AI

Section 10 of the AI EO specifies actions to facilitate the adoption of AI across the federal government. The AI EO requires agencies to establish a Chief Artificial Intelligence Officer position, report on their programs to use AI, and promote hiring of AI talent. It also specifically discourages agencies from broadly banning use of generative AI.

This section further addresses steps to support federal acquisition of AI products from the private sector, including issuing guidance to ensure that contracts align with safety and security guidance for AI systems, facilitating government-wide acquisition solutions for certain AI products, prioritizing funding for AI projects, and establishing a framework to prioritize AI offerings in the Federal Risk and Authorization Management Program authorization process.

Strengthening American Leadership Abroad

In Section 11, the AI EO directs actions intended to further US global leadership on standards and frameworks for development and deployment of AI. In collaboration with the Department of Commerce and Department of Homeland Security, the State Department will expand engagement and lead efforts to establish international AI frameworks. The Department of Homeland Security, for example, is tasked with creating a plan to encourage the international adoption of safety and security guidelines for critical infrastructure.

Next Steps

While the precise pace and course of implementation remain to be seen, the AI EO sets out timelines ranging from 30 days to a year for agencies to act. Given the wide range of actions planned, companies should identify priority areas to monitor, both for opportunities to provide private sector input and for legal and regulatory developments.

We expect that the AI EO will not diminish Congressional efforts to enact a statutory framework to govern AI development and use. Those efforts are ongoing, and lawmakers from both political parties agree with many of the policy initiatives included in the AI EO, such as safety testing, risk assessments, enhanced cybersecurity for AI systems, and global coordination of AI governance; such initiatives would nonetheless be on more solid legal footing if Congress enacts an AI law. The AI EO is also unlikely to stop state legislative efforts to pass AI laws. Congress, however, could preempt such laws when creating a national framework.

1 White House, Fact Sheet: President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence (Oct. 30, 2023).

2 The AI EO defines generative AI to encompass “the class of AI models that emulate the structure and characteristics of input data in order to generate derived synthetic content. This can include images, videos, audio, text, and other digital content.”

3 A dual-use foundation model is defined as “an AI model that is trained on broad data; generally uses self-supervision; contains at least tens of billions of parameters; is applicable across a wide range of contexts; and that exhibits, or could be easily modified to exhibit, high levels of performance at tasks that pose a serious risk to security, national economic security, national public health or safety, or any combination of those matters.”
