July 07, 2023

UK's Approach to Regulating the Use of Artificial Intelligence


The UK Government published its AI White Paper on 29 March 2023, setting out its proposals for regulating the use of artificial intelligence (AI) in the United Kingdom. The White Paper is a continuation of the AI Regulation Policy Paper, which introduced the UK Government's vision for a future "pro-innovation" and "context-specific" AI regulatory regime in the United Kingdom.

The White Paper proposes a different approach to AI regulation compared to the EU's AI Act. Instead of introducing new, far-reaching legislation to regulate AI in the United Kingdom, the UK Government is focusing on setting expectations for the development and use of AI and on empowering existing regulators, such as the Information Commissioner's Office (ICO), the Financial Conduct Authority (FCA) and the Competition and Markets Authority (CMA), to issue guidance and regulate the use of AI within their remits.

However, it remains to be seen whether the UK Government will make further changes to the approach proposed in the White Paper as part of the UK's drive to take a coordinated approach with key international partners such as the United States and other G7 countries. In this context, the UK Government has announced its intention to host a global summit on AI safety in autumn 2023 to "agree safety measures to evaluate and monitor the most significant risks from AI".

What are the key takeaways for businesses from the White Paper?

Scope:

Unlike the EU's draft AI Act, the White Paper does not propose an overarching definition of what the UK Government means by "AI" or "AI system". Instead, the White Paper defines AI by reference to two characteristics – adaptivity and autonomy – to future-proof the proposed regulatory framework against new technologies. While the lack of a precise definition of AI might create some legal uncertainty, it will be up to individual regulators to issue guidance to businesses setting out their expectations about the use of AI within their remits.

The regulatory approach proposed by the UK Government in the White Paper applies to the whole of the United Kingdom. The White Paper does not propose changing the territorial applicability of existing UK legislation relevant to AI. Practically, this means that where existing legislation relating to the use of AI has extra-territorial application (such as the UK General Data Protection Regulation), the guidance and enforcement of existing regulators might also have effect outside the United Kingdom.

A principles-based approach:

The regulatory framework proposed in the White Paper is underpinned by five broad cross-sectoral principles:

  1. Safety, security and robustness: AI systems should function safely, meaning regulators may need to introduce measures for regulated entities to ensure their AI systems are technically secure. Regulators may also need to consider providing guidance that is coordinated and coherent with the activities of other regulators.
  2. Appropriate transparency and explainability: AI systems should be appropriately transparent and explainable, meaning parties should have access to the decision-making processes of an AI system. This is important in increasing public trust, a significant driver of AI adoption. The White Paper acknowledges that regulators may need to find ways to encourage relevant life cycle actors to implement appropriate transparency measures.
  3. Fairness: AI systems should not undermine the rights of individuals and organisations, discriminate unfairly or create unfair outcomes. The White Paper acknowledges that regulators may need to develop and publish descriptions and illustrations of fairness that apply to AI systems within their domains.
  4. Accountability and governance: AI systems should be subject to governance measures ensuring effective oversight with clear lines of accountability across the AI life cycle. Regulators must look for ways to ensure that clear expectations for regulatory compliance and good practice are placed on actors in the AI supply chain.
  5. Contestability and redress: The White Paper proposes that, where appropriate, impacted third parties and actors in the AI life cycle should be able to contest an AI decision or outcome that is harmful or creates a material risk. Regulators will be expected to clarify methods available to third parties to contest AI decisions and receive redress.

Initially, the White Paper proposes that the five principles will be issued on a non-statutory basis. The White Paper envisages that over the next year, regulators will publish guidance interpreting the principles in their domain, including practical tools to help companies comply with the principles. However, the White Paper anticipates that the UK Government will introduce a statutory duty on regulators to have "due regard" to the principles in the future.

Empowering existing regulators:

Unlike the EU's draft AI Act, the White Paper does not propose creating a new AI regulator. The UK Government recognises that doing so would create complexity and confusion and undermine the mandates of existing regulators. Instead, the UK Government plans to support existing regulators in applying the principles using the powers and resources available to them. The regulator-led approach received support from industry during the consultation on the AI Regulation Policy Paper.

The Government expects regulators to:

  • In the next 6 months, assess and apply the principles to AI use cases falling within their remit, prioritising the principles according to the needs of their sector.
  • In the next 6 to 12 months, issue new guidance, or update existing guidance, to businesses on how the principles interact with existing legislation and to illustrate what compliance should look like.
  • Support businesses operating within the remits of multiple regulators by collaborating and producing clear and consistent guidance.

While some regulators have already published detailed guidance on the use of AI within their remit (such as the ICO's Guidance on AI and data protection and its guidance on Explaining decisions made with AI), we expect that regulators will update their existing guidance or publish new guidance to take into account the five principles proposed by the UK Government in the White Paper.

A centralised function:

Following feedback from industry that existing regulators might not have the capacity to ensure a consistent and coordinated approach (especially for businesses operating across the remits of multiple regulators), the UK Government has proposed the creation of central functions to support the proposed framework, including by:

  • Developing a central monitoring, evaluation and risk assessment framework,
  • Providing central guidance to businesses looking to navigate the AI regulatory landscape in the United Kingdom,
  • Offering a multi-regulator AI sandbox, and
  • Supporting cross-border coordination with other countries.

While no official announcement has been made, the UK Government's Office for Artificial Intelligence (a unit within the Department for Science, Innovation and Technology) is likely to take on at least some of these central functions.

What does the White Paper mean for generative AI?

Unlike the European Parliament's version of the EU AI Act proposal from June 2023, the White Paper mentions generative AI only sparingly, which is surprising given that the White Paper was published in March 2023, when lawmakers around the world were already becoming increasingly concerned about the use of generative AI.

However, there are two key takeaways relating to the use of generative AI:

  1. The UK Government plans to clarify the relationship between intellectual property law and generative AI to provide confidence to businesses. In particular, the UK Government is working with users and rights holders on a code of practice on copyright and AI which the UK Government expects parties to enter into on a voluntary basis.
  2. The UK Government plans to establish a regulatory sandbox for AI which, following an initial pilot, is expected to be expanded to AI innovations covering multiple sectors, such as generative AI models.

UK regulators have also published statements and guidance relating to the implications of generative AI.

What should businesses be doing now?

  1. Corporate boards should consider how they can demonstrate board-level oversight of AI risks and ask management to put AI on the agenda of board meetings in order to receive both management's views and perspectives from outside advisors.
  2. Management should continue to consider who is responsible for AI governance within their organisation. They should also implement policies governing the development and use of AI that align with the five principles proposed in the White Paper.
  3. Businesses should keep a register of their use of AI tools and systems to understand how AI is used within their organisation.
  4. Organisations should monitor legislative and regulatory developments in the AI realm applicable to their business, such as the EU's proposed AI Act and AI Liability Directive, and any new or updated guidance from UK regulators (which is expected within the next 6 to 12 months).
