July 7, 2023

President Biden Convenes with AI Industry Leaders; Senator Schumer Proposes New SAFE AI Innovation Framework


On Tuesday, June 20, 2023, President Joe Biden met with experts working at the intersection of technology and society to discuss the opportunities and challenges of artificial intelligence (AI) development. President Biden emphasized the “need to manage the risks to our society, to our economy and our national security” and referenced the Blueprint for an AI Bill of Rights, issued in October 2022, to ground federal principles in an AI-powered world. The AI Bill of Rights includes a technical companion document that calls for additional technical documentation for algorithmic discrimination testing, IP protection, and privacy measures, and it may provide a foundation for legislative and regulatory efforts in other state, federal, and global AI frameworks. For example, Connecticut Senate Bill No. 1103, which will become effective July 1, 2023, already includes an intent to adopt a Connecticut state AI bill of rights based on the federal blueprint. California’s draft AB 331 (“Automated Decision Tools”) would also broadly require additional technical safeguards around AI tools, in addition to the CPRA’s regulations regarding automated decision making. Businesses will need to consider how to integrate technical accountability documentation and logging data into their AI governance and compliance programs as regulatory activity continues across 84 state bills, 59 federal bills, and similar measures in 37 other countries across six continents.

President Biden’s meeting with industry experts was closely followed by another major federal effort to address AI. On Wednesday, June 21, 2023, during a keynote address at the Center for Strategic and International Studies, Senator Chuck Schumer unveiled a proposed bipartisan SAFE Innovation AI Framework, a set of principles and guidelines for AI developers, companies, and policymakers. The Framework is intended to provide a baseline for protecting security, transparency, and accountability while maintaining innovation as its “north star.” The SAFE Innovation Framework consists of five key principles:

Overview of SAFE Innovation Framework

  1. Security: AI systems should be designed to ensure the safety of individuals and society as a whole. This includes ensuring that AI systems are secure, reliable, and resilient, and that they do not pose a threat to human life or property.
  2. Accountability: AI developers and users should be accountable for the decisions made by AI systems. This includes ensuring that AI systems are transparent, explainable, and auditable, and that intellectual property, copyright, and liability concerns are addressed.
  3. Foundations: AI systems should promote American democratic values and protect elections.
  4. Explainability: AI systems should be designed to provide clear and understandable explanations of their decisions and actions. Senator Schumer recognized explainability as “one of the thorniest and most technically complicated issues we face—but perhaps the most important of all.” It will be critical for AI developers to ensure that AI systems are interpretable and that they provide meaningful feedback to users.
  5. Innovation: US-led AI innovation should be encouraged and supported, while ensuring that it is done in a responsible and ethical manner. This includes promoting research and development in AI, while prioritizing accountability, transparency, and security.

AI Insight Forums

In addition to proposing the SAFE Innovation AI Framework, Senator Schumer recognized that Congress will need a process to develop comprehensive AI legislation involving multiple committees with jurisdiction over industries and issues impacted by AI. The breadth of committees that will need to be involved will invariably slow the process of advancing legislation, even if it is a priority of the Majority Leader.

Beginning this fall, Senator Schumer plans to host a series of “AI Insight Forums” to continue engaging with AI experts to better understand the opportunities and challenges of AI. Threshold considerations for the AI Insight Forums will include:

  • What is the appropriate balance between collaboration and competition among AI developers?
  • What is the appropriate degree of federal involvement in taxing and spending in the AI industry?
  • What is the necessary balance between private and open AI systems for AI development?

Intersection with European AI Act

On June 14, 2023, the European Parliament approved its version of the draft EU AI Act. European institutions are currently negotiating the final text of the legislation and forthcoming regulations governing AI system design, testing, and monitoring both pre-deployment and post-deployment.

During his speech on June 21, 2023, Senator Schumer stated that his office has reviewed the European Union’s AI Act and other foreign legislative efforts and found that “none have really captured the imagination of the world.” Instead, Senator Schumer is committed to an “American proposal” for AI regulation. Senator Schumer’s proposal comes on the heels of coordinated efforts by federal agencies and departments in the AI space. On April 25, 2023, the Federal Trade Commission (FTC), Department of Justice Civil Rights Division (DOJ), Equal Employment Opportunity Commission (EEOC), and the Consumer Financial Protection Bureau (CFPB) issued a joint statement summarizing each department’s and agency’s work to root out possible discrimination in AI systems. These efforts in the United States and European Union represent a broader global effort to link technical standards with policy recommendations and will require businesses to adopt a holistic approach to compliance.
