5 May 2021

The European Union Proposes New Legal Framework for Artificial Intelligence


On 21 April 2021, the European Commission proposed a new, transformative legal framework to govern the use of artificial intelligence (AI) in the European Union. The proposal adopts a risk-based approach whereby uses of artificial intelligence are categorised and restricted according to whether they pose an unacceptable, high, or low risk to human safety and fundamental rights. The proposal is widely considered to be one of the first of its kind in the world and would, if passed, have profound and far-reaching consequences for organisations that develop or use technologies incorporating artificial intelligence.

Background

The European Commission's proposal has been in the making since 2017, when EU legislators enacted a resolution and a report with recommendations to the Commission on Civil Law Rules on Robotics. In 2020, the European Commission published a white paper on artificial intelligence. Last October, the European Parliament issued a resolution with recommendations to the Commission on a civil liability regime for artificial intelligence.

The proposal issued last month draws from all of these documents in seeking to "address the risks and problems linked to AI, without unduly constraining or hindering technological development." These twin objectives of the regulation, namely the maintenance of both trust and excellence in AI technology, were echoed by Margrethe Vestager, the European Commission's executive vice president for the digital age, upon publication of the proposal on 21 April: "With these landmark rules, the EU is spearheading the development of new global norms to make sure AI can be trusted. By setting the standards, we can pave the way for ethical technology worldwide and ensure that the EU remains competitive along the way."

Application & Framework

The proposed rules would apply to all providers of artificial intelligence systems who place on the market (or put into service) artificial intelligence systems in the EU, irrespective of whether they are established within the EU, as well as to users of artificial intelligence systems established within the EU. The rules would also apply to providers and users of artificial intelligence systems where the output produced by the system is used in the EU.

In adopting a risk-based approach to artificial intelligence, the proposal distinguishes among unacceptable, high, and low risks posed to the fundamental rights and safety of AI users.

Unacceptable risks, for example, include uses of AI that manipulate vulnerabilities of specific groups of people or attempt to evaluate individuals over time so as to give them a "social score." The use of "real-time" remote biometric identification systems in publicly accessible spaces is also deemed to pose an unacceptable risk to human safety and fundamental rights, although certain limited exceptions apply.

High-risk AI systems, on the other hand, could include functions related to critical infrastructure, education and vocational training, and employee selection. Uses of artificial intelligence in this category will be subject to strict requirements and will need a completed conformity assessment before the technology may enter the European market. For example, providers must establish, maintain and document rigorous risk management systems, ensure an appropriate type and degree of transparency to users, and make provisions for effective oversight and control of the AI technology by natural persons.

Low-risk AI systems that interact with humans, detect emotions or determine association with social categories based on biometric data, or generate or manipulate content (such as those used to create "deep fakes") will be subject to transparency obligations. Users must be notified of the circumstances surrounding their interaction with the AI system so as to allow them to make an informed choice about continuing to use the technology.

Governance & Enforcement

With respect to governance, the proposal would establish a European Artificial Intelligence Board composed of representatives from the EU Member States and the Commission in order to facilitate implementation of the regulation. At the national level, EU Member States will be responsible for designating competent authorities to take all measures necessary to ensure that the rules are properly and effectively implemented.

Penalties for non-compliance with the prohibitions or requirements laid out in the regulation would be severe: fines of up to €30,000,000 or 6% of total worldwide annual turnover, whichever is higher.

Global Context & Next Steps

Over the last decade, the EU has played a leading role in shaping and transforming regulation on the use of emerging technologies and data worldwide, most notably with the General Data Protection Regulation and most recently with its proposals on the Digital Services Act and Digital Markets Act. If enacted, this regulation could act as a similar blueprint for future artificial intelligence regulations adopted by other countries around the world. Still, it could be several years before the proposed regulation becomes law in the EU. Most immediately, the European Parliament and EU Member States will have to adopt the proposal for it to come into force. Once adopted, the AI regulation will be directly applicable across the EU.

Alongside the new legal framework on AI, the European Commission has also proposed new rules on machinery products to ensure that the new generation of machinery guarantees the safety of users and consumers, as well as the safe integration of the AI systems into machinery.

While the EU is one of the first to move the regulation of artificial intelligence forward, it joins a host of other jurisdictions that have demonstrated a similar interest in regulating current and emerging technologies more closely. The UK Government recently announced its intention to introduce new legislation to regulate the security of consumer smart devices, and US President Biden has also recently signalled plans to rein in Big Tech by selecting prominent antitrust scholars for positions in the Federal Trade Commission and the National Economic Council. Proposals to further regulate the uses of technology and data are likely to emerge around the world in the coming years.
