June 16, 2023

European Parliament Reaches Agreement on its Version of the Proposed EU Artificial Intelligence Act

Other Author: Salome Peters, Legal Intern

Executive summary

  • On June 14, 2023, the European Parliament (the “Parliament”) approved its version of the draft EU Artificial Intelligence Act (the “EU AI Act”).
  • The European institutions will now enter negotiations to agree on the final text. This means that even if the EU AI Act is adopted quickly, it will apply in 2025 at the earliest.
  • Jurisdictional reach: If adopted, the EU AI Act will impose a set of obligations on both providers and deployers of in-scope AI systems used in or producing effects in the EU, irrespective of their place of establishment.
  • The Parliament’s text revolves around promoting trustworthy AI. Key proposals in the Parliament’s version include:
    • Expanding the prohibition on certain uses of AI systems to include remote biometric identification in publicly accessible spaces, as well as emotion recognition and predictive policing systems;
    • Expanding the list of high-risk AI systems to include systems used to influence voters or used in the recommender systems of very large online platforms (“VLOPs”); and
    • Imposing requirements on providers of foundation models (i.e., AI systems trained on broad data at scale, designed for generality of output, and adaptable to a wide range of distinctive tasks), including those that power generative AI systems.

Background and Previous Version of the EU AI Act

The EU AI Act was first proposed in 2021 by the European Commission (the “Commission”) to regulate the placing on the market, putting into service and use of AI systems. The Commission proposed a risk-based approach to AI, dividing AI systems into three main risk categories:

  • Unacceptable risk, which would be prohibited, such as social scoring or systems that exploit the vulnerabilities of specific groups of persons;
  • High risk, which would be permitted subject to compliance with strict conformity, documentation, data governance, design, and incident reporting obligations. These include systems used in civil aviation security, in medical devices, or in the management and operation of critical infrastructure;
  • Limited risk, which covers systems that interact directly with humans (such as chatbots) and which would be permitted as long as they comply with certain transparency obligations (i.e., end-users must be made aware that they are interacting with a machine).

Providers placing AI systems on the market or putting them into service in the EU would be in scope irrespective of their place of establishment. Also in scope are deployers of AI systems located in the EU, as well as providers and deployers located in third countries where the output of the system is intended for use in the EU. For more information on the background of the EU AI Act, please refer to our previous Legal Update.

What’s New in the Parliament’s Version

Key amendments proposed by the Parliament in relation to the Commission’s version include:

Proposed Obligations for Providers of Foundation Models

If enacted, the Parliament’s proposal would impose obligations on providers of foundation models (i.e., AI systems trained on broad data at scale, designed for generality of output, and adaptable to a wide range of distinctive tasks). In the Parliament’s draft, providers of foundation models would be required to:

  • Demonstrate through appropriate design, testing and analysis that reasonably foreseeable risks have been properly identified and mitigated;
  • Only incorporate datasets that are subject to appropriate data governance measures for foundation models, including with regard to the suitability of data sources and possible biases;
  • Design and develop the model to achieve appropriate levels of performance, predictability, interpretability, corrigibility, safety and cybersecurity;
  • Prepare extensive technical documentation and intelligible instructions for use that allow downstream providers to comply with their respective obligations;
  • Establish a quality management system to ensure and document compliance with the obligations above; and
  • Register the foundation model in an EU database to be maintained by the Commission.

Additionally, providers of foundation models used in generative AI systems would be obliged to disclose that content was AI-generated and to ensure that the system has safeguards against the generation of content in breach of EU law. They would further be required to publish a summary of the use of training data protected under copyright law.

Proposed Additional Obligations for Deployers of High-Risk AI Systems

While the Commission’s draft focused largely on providers of high-risk AI systems (i.e., the natural or legal person that develops an AI system, or that has an AI system developed, with a view to placing it on the market or putting it into service under its own name or trademark), the Parliament’s version would expand the scope of obligations to deployers of such systems (i.e., the natural or legal person under whose authority the system is used, except where the system is used in the course of a personal, non-professional activity). These obligations would include implementing human oversight, monitoring robustness and cybersecurity measures, and conducting a fundamental rights impact assessment, taking into consideration the specific context of use, before deploying a high-risk AI system.

Prohibited AI Practices and High-Risk AI Systems

The Parliament expanded the list of AI practices that would be prohibited to include the following:

  • Facial recognition and any other form of real-time remote biometric identification systems in publicly accessible spaces. AI systems used for the analysis of recorded footage of publicly accessible spaces through ‘post’ remote biometric identification would also be banned, unless subject to prior judicial authorization and necessary for the investigation of serious criminal offenses;
  • Predictive policing systems;
  • Biometric categorization systems that use sensitive characteristics of natural persons;
  • Emotion recognition systems used in law enforcement, border management, the workplace, and educational institutions;
  • The creation of facial recognition databases on the basis of indiscriminate scraping of biometric data from social media or CCTV footage.

The list of high-risk AI systems was also amended to encompass AI systems aimed at influencing voters in political campaigns or used in the recommender systems (i.e., algorithms aimed at suggesting relevant items to end-users) of very large online platforms (“VLOPs”). For an overview of the algorithmic transparency obligations imposed on VLOPs’ recommender systems under the EU Digital Services Act, please refer to our previous Legal Update.

Timeframe for Incident Reporting

The timeframe for reporting serious incidents would be reduced from 15 days in the Commission’s version to 72 hours in the Parliament’s version.

Increased Fines

Penalties for non-compliance with the prohibition of certain AI practices would be increased from up to EUR 30 million or 6% of total worldwide annual turnover in the Commission’s version to up to EUR 40 million or 7% of the offender’s total worldwide annual turnover, whichever is higher, in the Parliament’s version.
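For illustration (a hypothetical calculation, not a figure from the draft text): under the Parliament’s version, an offender with a total worldwide annual turnover of EUR 1 billion could face a fine of up to EUR 70 million, since 7% of that turnover (EUR 70 million) exceeds the EUR 40 million floor.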

Next steps

The Commission, the Council of the EU (the “Council”) and the Parliament will now start negotiations (trilogues) to agree on the final text of the EU AI Act. If adopted, the EU AI Act would be directly applicable across the Union without the need for further implementation into Member State law. Obligations imposed on providers, importers, distributors and deployers of AI systems would apply 24 months after the regulation enters into force. This means that the EU AI Act would apply in 2025 at the earliest.

What Businesses Should Be Doing Now

If adopted, the EU AI Act would increase scrutiny of AI systems developed and deployed in the EU. Investors, developers and businesses relying heavily on AI systems likely to be considered high risk could benefit from starting conformity efforts at the early stages of AI system development, which would also help build trust in their systems. Preliminary compliance steps might include:

  • Critically assessing data governance practices for the training of AI models in view of the potential requirements of the draft EU AI Act (as well as other global frameworks in the making);
  • Preparing the supporting documentation of the relevant AI system in line with potential conformity, documentation, data governance and design obligations; and
  • Assessing the processes in place for reporting incidents related to AI systems.

In addition, GDPR rules may already apply to personal data fed into AI models, and compliance with the proposed EU AI Act will likely build on established GDPR practices. Existing cybersecurity rules and best practices should also be considered when developing or deploying AI systems.
