On 20 May 2020, the UK Information Commissioner's Office ("ICO") and The Alan Turing Institute ("Turing") (the UK's national institute for data science and artificial intelligence) published detailed guidance for organisations that are using artificial intelligence ("AI") to make, or support the making of, decisions about individuals.
Ensuring that AI is used legally and ethically, in a controlled and consistent manner throughout an organisation, is becoming an increasingly challenging but vital task for businesses worldwide, in light of the growing number, scope and sophistication of AI solutions being adopted and the broadening international landscape of legal and regulatory guidance with which they must comply.
This latest guidance, which was published in response to the UK Government's AI Sector Deal, assists organisations with establishing frameworks and systems to explain decisions made using AI to the individuals affected by those decisions. Though the guidance is not a statutory code of practice, it represents "good practice" and is particularly useful for those organisations seeking to implement frameworks and systems for AI that uses personal data in a way that complies with the EU General Data Protection Regulation ("GDPR") and the UK Data Protection Act 2018.
The guidance is split into three parts aimed at different audiences:
Part | Target Audience
Part 1: The basics of explaining AI | Compliance teams and data protection officers
Part 2: Explaining AI in practice | Technical teams
Part 3: What explaining AI means for your organisation | Senior management
Part 1: The basics of explaining AI
Part 1 of the guidance is aimed at an organisation's compliance teams, including its data protection officer(s), as well as all staff members involved in the development of AI systems. Part 1 outlines some key terms and concepts associated with AI, the legal framework applicable to explaining decisions made or supported by AI, the benefits and risks of (not) explaining such decisions and the different types of explanations that organisations can provide.
The guidance distinguishes between AI-enabled decisions which are:

- solely automated, i.e. made without any human involvement; or
- AI-assisted, where a "human in the loop" remains involved in reaching the final decision.
For solely automated decisions that produce legal or similarly significant effects on an individual (i.e. something that affects an individual's legal status, rights, circumstances or opportunities, e.g. a decision about a loan), the guidance draws on specific GDPR requirements and directs organisations to:

- tell individuals that such processing is taking place and provide meaningful information about the logic involved, as well as the significance and envisaged consequences for them; and
- give individuals a simple way to request human intervention, express their point of view or contest the decision.
Regardless of whether AI-assisted decisions are solely automated, or whether they involve a "human in the loop", the guidance makes it clear that so long as personal data is used in the AI system, organisations must still comply with the GDPR's processing principles. In particular, the guidance concentrates on the processing principles of fairness, transparency and accountability as being of particular relevance and provides advice for organisations on how compliance with these statutory obligations can be achieved in practice. For instance, the guidance makes it clear that individuals impacted by AI-assisted decisions should be able to hold someone accountable for those decisions, specifically stating that "where an individual would expect an explanation from a human, they should instead expect an explanation from those accountable for an AI system". An important part of this accountability is for organisations to ensure adequate procedures and practices are in place for individuals to receive explanations on the decision-making processes of AI systems which concern them.
When it comes to actually explaining AI-assisted decisions, the guidance identifies the following six main types of explanation:

- Rationale explanation: the reasons that led to a decision, delivered in an accessible and non-technical way.
- Responsibility explanation: who is involved in the development, management and implementation of the AI system, and who to contact for a human review of a decision.
- Data explanation: what data has been used in a particular decision and how it has been used.
- Fairness explanation: the steps taken to ensure that the AI system's decisions are generally unbiased and fair, and whether an individual has been treated equitably.
- Safety and performance explanation: the steps taken to maximise the accuracy, reliability, security and robustness of the AI system's decisions and behaviours.
- Impact explanation: the effect that the use of the AI system and its decisions has, or may have, on an individual and on wider society.
The guidance suggests that there is no "one-size-fits-all" explanation for all decisions made or supported by AI. Organisations should consider different explanations in different situations and use a layered approach that allows further detail to be provided where it is required (a simple illustration of what such a layered record might look like in practice is sketched below). With this in mind, the guidance notes that some of the contextual factors organisations should consider when constructing an explanation are:

- the domain or sector in which the AI system is deployed;
- the impact of the decision on the individual;
- the type of data used by the AI system;
- the urgency of the decision; and
- the audience the explanation is aimed at.
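By way of illustration only, and not something prescribed by the guidance, a layered explanation of an individual decision might be captured in an organisation's systems along the following lines; the structure, explanation types and wording in this sketch are hypothetical.

```python
# Hypothetical sketch of a "layered" explanation record: a short first layer is
# shown to the individual up front, with further detail available on request.
from dataclasses import dataclass, field


@dataclass
class ExplanationLayer:
    summary: str              # first layer: short, plain-language statement
    further_detail: str = ""  # second layer: provided if the individual asks for more


@dataclass
class DecisionExplanation:
    decision_id: str
    layers: dict[str, ExplanationLayer] = field(default_factory=dict)


explanation = DecisionExplanation(
    decision_id="loan-2020-0001",  # illustrative identifier
    layers={
        "rationale": ExplanationLayer(
            summary="The application was declined mainly because of a high level of existing debt.",
            further_detail="The model weighed income against existing debt and recent missed payments.",
        ),
        "responsibility": ExplanationLayer(
            summary="You can ask our credit team to review this decision.",
        ),
    },
)

# First layer shown to the individual; deeper layers are released on request.
print(explanation.layers["rationale"].summary)
```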
Part 2: Explaining AI in practice
Part 2 of the guidance considers the practicalities associated with explaining AI-assisted decisions to affected individuals. It considers how organisations can decide upon the appropriate explanations for their AI decisions, how those organisations can choose an appropriate model for explaining those decisions and how certain tools may be used to extract explanations from less interpretable models. This part of the guidance is primarily aimed at technical teams, but may also be useful to an organisation's compliance teams and data protection officer.
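To make that last point concrete, the short sketch below shows one example of the kind of post-hoc technique such tools rely on: permutation feature importance, computed here with scikit-learn over an entirely synthetic, illustrative "loan decision" dataset. The feature names, model and data are hypothetical, and the guidance does not mandate any particular tool or library.

```python
# Minimal sketch: surfacing a rationale-style explanation from a less
# interpretable model using permutation importance on synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
feature_names = ["income", "existing_debt", "years_at_address", "missed_payments"]

# Synthetic applicant data: the underlying rule mainly weighs income against debt.
X = rng.normal(size=(1000, 4))
y = (X[:, 0] - X[:, 1] - 0.5 * X[:, 3] + rng.normal(scale=0.3, size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Permutation importance estimates how much each feature drives the model's
# decisions; the ranking can feed a layered, plain-language explanation.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranking = sorted(zip(feature_names, result.importances_mean), key=lambda p: -p[1])

print("Factors that most influenced decisions made by this model:")
for name, importance in ranking:
    print(f"  {name}: {importance:.3f}")
```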
The guidance sets out suggestions on six tasks that will help organisations design explainable AI systems and deliver appropriate explanations according to the needs and skills of the audiences they are directed at. "Annexe 1" of the guidance takes the six tasks a step further, by providing a practical example of how these tasks can be applied in the scenario of a healthcare organisation presenting an explanation of a cancer diagnosis to an affected patient.
The six tasks identified are:

1. Select priority explanations by considering the domain, use case and impact on the individual.
2. Collect and pre-process the data in an explanation-aware manner.
3. Build the system to ensure that relevant information can be extracted for a range of explanation types.
4. Translate the rationale of the system's results into useable and easily understandable reasons.
5. Prepare implementers to deploy the AI system.
6. Consider how to build and present the explanation.
Part 3: What explaining AI means for your organisation
The final part of the guidance is primarily aimed at senior executives within organisations and covers the roles, policies, procedures and documentation that should be put in place to ensure that organisations are able to provide meaningful explanations to individuals who are subject to AI-assisted decisions.
Organisations should identify the specific people who are involved in providing explanations to individuals about AI-assisted decisions: the product managers, AI development teams, implementers, compliance teams and senior management. The guidance makes it clear that all individuals involved in the decision-making pipeline (from the design through to the implementation of the AI model) have a part to play in delivering explanations to those individuals affected by the AI model's decisions. The guidance also provides an overview of what is expected of some of the specific roles within an organisation when it comes to providing explanations.
The guidance acknowledges that not every organisation will build its own AI systems and that many may procure these systems from third-party vendors. Nonetheless, even if the organisation is not involved in designing and building the AI system or collecting the data for the system, it should ask the third-party vendor questions about the AI system in order to meet its obligations as a data controller and to be able to explain the AI-supported decisions to affected individuals. Appropriate procedures should be in place to ensure that the vendor has taken the steps necessary for the controller to be able to explain the AI-assisted decisions.
An organisation's policies and procedures should cover the explainability considerations contained in the guidance, which states that, "in short, they [policies and procedures] should codify what’s in the different parts of this guidance for your organisation". The guidance addresses some of the procedures an organisation should put in place (for example, in relation to training), as well as the documentation needed to demonstrate the explainability of an AI system effectively (including the documentation legally required under the GDPR).
Conclusion
The guidance from the ICO and the Turing is a positive step towards helping organisations achieve regulatory compliance with data protection legislation in a complex and ever-evolving area. Following the ICO's earlier draft guidance on its AI auditing framework, published for consultation, this new guidance is a further welcome step from the ICO for compliance and technical teams. It is encouraging to see the ICO and the Turing working together to bridge the gap between the regulatory requirements and the technical solutions that can be adopted to meet those requirements.
Achieving compliance, and ensuring a consistent approach is taken to maintaining it as the legal and regulatory landscape in this area grows, is an increasingly difficult but important task for organisations, given the ever-expanding number, scope and sophistication of AI solutions being implemented. Following the guidance will help organisations address not only some of the legal risks associated with AI-assisted decision-making, but also some of the ethical issues involved in using AI systems to make decisions about individuals.