June 02 2020

The ICO and The Alan Turing Institute publish guidance on explaining decisions made with AI


On 20 May 2020, the UK Information Commissioner's Office ("ICO") and The Alan Turing Institute ("Turing") (the UK's national institute for data science and artificial intelligence) published detailed guidance for organisations that are using artificial intelligence ("AI") to make, or support the making of, decisions about individuals.

Ensuring AI is used legally and ethically, in a controlled and consistent manner throughout an organisation, is becoming an increasingly challenging but vital task for businesses worldwide, in light of the growing number, scope and sophistication of AI solutions being adopted and the broadening international landscape of legal and regulatory guidance with which they must comply.

This latest guidance, which was published in response to the UK Government's AI Sector Deal, assists organisations with establishing frameworks and systems to explain decisions made using AI to the individuals affected by those decisions. Though the guidance is not a statutory code of practice, it represents "good practice" and is particularly useful for organisations seeking to implement frameworks and systems for AI that uses personal data in a way that complies with the EU General Data Protection Regulation ("GDPR") and the UK Data Protection Act 2018.

The guidance is split into three parts aimed at different audiences:

Part                                                 Target Audience
1: The basics of explaining AI                       Compliance teams and data protection officers
2: Explaining AI in practice                         Technical teams
3: What explaining AI means for your organisation    Senior management


Part 1: The basics of explaining AI

Part 1 of the guidance is aimed at an organisation's compliance teams, including the data protection officer(s) as well as all staff members involved in the development of AI systems. Part 1 outlines some key terms and concepts associated with AI, the legal framework applicable to explaining decisions made or supported by AI, the benefits and risks of (not) explaining such decisions and the different types of explanations that organisations can provide.

The guidance distinguishes between AI-enabled decisions which are:

  1. solely automated – e.g. an online loan application with an instant result; and
  2. made with a "human in the loop" – i.e. where there is meaningful human involvement in reaching the AI-assisted decision, e.g. CV screening software that provides recommendations to a recruitment team, but where decisions about which candidates to invite to interview are ultimately made by a human.

For solely automated decisions that produce legal or similarly significant effects on an individual (i.e. something that affects an individual's legal status, rights, circumstances or opportunities, e.g. a decision about a loan), the guidance draws on specific GDPR requirements and directs organisations to:

  • be proactive in giving individuals meaningful information about the logic involved in, as well as the significance and envisaged consequences of, any AI-assisted decisions affecting those individuals (Articles 13 and 14);
  • give individuals the right to access meaningful information about the logic involved in, as well as the significance and envisaged consequences of, any AI-assisted decisions affecting those individuals (Article 15); and
  • give individuals at least the right to express their point of view, and in certain instances object to / contest the AI-assisted decision and obtain human intervention (Articles 21 and 22).

Regardless of whether AI-assisted decisions are solely automated or involve a "human in the loop", the guidance makes it clear that, so long as personal data is used in the AI system, organisations must still comply with the GDPR's processing principles. The guidance concentrates on the principles of fairness, transparency and accountability as being of particular relevance and provides advice on how compliance with these statutory obligations can be achieved in practice. For instance, it makes clear that individuals impacted by AI-assisted decisions should be able to hold someone accountable for those decisions, specifically stating that "where an individual would expect an explanation from a human, they should instead expect an explanation from those accountable for an AI system". An important part of this accountability is for organisations to ensure that adequate procedures and practices are in place for individuals to receive explanations of the decision-making processes of the AI systems which concern them.

When it comes to actually explaining AI-assisted decisions, the guidance identifies the following six main types of explanation:

  1. Rationale explanation: an explanation of the reasons that led to an AI-assisted decision, which are to be delivered in an accessible and non-technical way.
  2. Responsibility explanation: an explanation of who is involved in the development, management and implementation of the AI system, and who to contact for a human review of an AI-assisted decision.
  3. Data explanation: an explanation of what data has been used in a particular decision and how such data was used.
  4. Fairness explanation: an explanation of the design and implementation steps taken across an AI system to ensure that the decisions it supports are generally unbiased and fair (including in relation to data used in the AI system), and whether or not an individual has been treated equitably.
  5. Safety and performance explanation: an explanation of the design and implementation steps taken across an AI system to maximise the accuracy, reliability, security and robustness of its decisions and behaviours.
  6. Impact explanation: an explanation of the design and implementation steps taken across an AI system to consider and monitor the impacts that the use of an AI system and the decisions it supports has, or may have, on individuals and on wider society.

The guidance suggests that there is no "one-size-fits-all" explanation for all decisions made or supported by AI. Organisations should consider different explanations in different situations and use a layered approach, allowing further detail to be provided where required. With this in mind, the guidance notes that the contextual factors organisations should consider when constructing an explanation include the following (an illustrative sketch of how these factors might be applied follows the list):

  • the sector in which they operate and deploy the AI system – e.g. an AI system used for diagnosis in the healthcare sector might require a more detailed explanation of its safety, accuracy and performance;
  • the impact on the individual – e.g. an AI system that sorts queues in an airport might have a lower impact on the individual, especially when compared to an AI system deciding whether an individual should be released on bail;
  • the data used, both to train and test the AI model and as input at the point of decision – e.g. where social data is used, individuals receiving a decision might want to learn from the decision and adapt their behaviour, possibly making changes if they disagree with the outcome. Where biophysical data is used, the individual will be less likely to disagree with the AI system's decision, but may prefer to be reassured about the safety and reliability of the decision and to know what the outcome means for them;
  • the urgency of the decision – e.g. where urgency is a factor, the individual may wish to be reassured about the safety and reliability of the AI model and understand what the outcome means for them; and
  • the audience the explanation is being provided to – e.g. is the audience the general public, experts in the particular field or the organisation's employees? Do the recipients require any reasonable adjustments to receive the explanation? Generally, the guidance suggests adopting a cautious approach, noting that "it is a good idea to accommodate the explanation needs of the most vulnerable individuals". If the audience is the general public, it may be fair to assume a lower level of expertise surrounding the decision than for a smaller audience of experts in the field, and this would need to be considered when delivering the explanation.
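
By way of illustration only (the guidance does not prescribe any particular implementation, and all of the names and rules below are hypothetical), the following sketch shows one way a technical team might record the six explanation types and the contextual factors above as a simple data structure, so that explanations can be prioritised and layered per use case:

```python
from dataclasses import dataclass
from enum import Enum, auto


class ExplanationType(Enum):
    """The six explanation types identified in the ICO/Turing guidance."""
    RATIONALE = auto()
    RESPONSIBILITY = auto()
    DATA = auto()
    FAIRNESS = auto()
    SAFETY_AND_PERFORMANCE = auto()
    IMPACT = auto()


@dataclass
class DecisionContext:
    """Contextual factors the guidance suggests weighing (hypothetical fields)."""
    sector: str      # e.g. "healthcare", "recruitment"
    impact: str      # "low", "medium" or "high" impact on the individual
    data_kind: str   # e.g. "social" or "biophysical"
    urgent: bool     # whether the decision is time-critical
    audience: str    # e.g. "general public", "domain experts"


def prioritise_explanations(ctx: DecisionContext) -> list[ExplanationType]:
    """Return explanation types in priority order for a given context.

    A deliberately simplistic heuristic for illustration: every decision gets
    a rationale and a responsibility explanation; the remaining types are
    layered on according to the contextual factors described in the guidance.
    """
    layers = [ExplanationType.RATIONALE, ExplanationType.RESPONSIBILITY]
    if ctx.impact == "high":
        layers += [ExplanationType.FAIRNESS, ExplanationType.IMPACT]
    if ctx.urgent or ctx.data_kind == "biophysical" or ctx.sector == "healthcare":
        layers.append(ExplanationType.SAFETY_AND_PERFORMANCE)
    if ctx.data_kind == "social":
        layers.append(ExplanationType.DATA)
    # Remove duplicates while preserving priority order.
    return list(dict.fromkeys(layers))


# Example: a high-impact healthcare diagnosis delivered to a member of the public.
context = DecisionContext(sector="healthcare", impact="high",
                          data_kind="biophysical", urgent=True,
                          audience="general public")
print([e.name for e in prioritise_explanations(context)])
```

In practice, the ordering of explanations would be the product of the prioritisation exercise described in Part 2 of the guidance, rather than hard-coded rules of this kind.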

Part 2: Explaining AI in practice

Part 2 of the guidance considers the practicalities associated with explaining AI-assisted decisions to affected individuals. It considers how organisations can decide upon the appropriate explanations for their AI decisions, how those organisations can choose an appropriate model for explaining those decisions and how certain tools may be used to extract explanations from less interpretable models. This part of the guidance is primarily aimed at technical teams, but may also be useful to an organisation's compliance teams and data protection officer.

The guidance sets out suggestions on six tasks that will help organisations design explainable AI systems and deliver appropriate explanations according to the needs and skills of the audiences they are directed at. "Annexe 1" of the guidance takes the six tasks a step further, by providing a practical example of how these tasks can be applied in the scenario of a healthcare organisation presenting an explanation of a cancer diagnosis to an affected patient.

The six tasks identified are:

  1. Select priority explanations by considering the sector / domain, use case and impact on the individual. The guidance notes that prioritising explanations is not an exact science and that there will be instances in which some individuals will seek explanations that differ from those sought by the majority of people, and which may therefore not have been prioritised;
  2. Collect and pre-process data in an explanation-aware manner. Organisations should be conscious of the risks of using the data they collect, including by ensuring that the data is representative of those about whom decisions are being made and that it does not reflect past discrimination;
  3. Build the AI system to ensure that relevant information can be extracted from it for a range of explanation types. The guidance acknowledges that this is a complex task and that not all AI systems can use straightforwardly interpretable AI (e.g. complex machine learning techniques that classify images, recognise speech or detect anomalies). For instance, where organisations use opaque algorithmic techniques or "black box" AI, they should thoroughly consider beforehand the potential risks and use these techniques alongside supplemental interpretability tools (the guidance provides a few technical examples of such tools);
  4. Translate the rationale of the AI system's results into useable and easily understandable reasons. This considers the statistical output of the AI system and how text, visual media, graphics, tables or a combination of these can be used to present explanations (a simple illustrative sketch of this step follows the list);
  5. Prepare implementers (i.e. the "humans in the loop") to deploy the AI system. Organisations should provide appropriate training to implementers to prepare them to use the AI system's results fairly and responsibly, including training on the different types of cognitive biases and the strengths and limitations of the AI system deployed; and
  6. Consider how to build and present the explanations. Organisations should consider how to present their explanation in an easily understandable and, where possible, layered way. It is also important that organisations make clear how decision recipients can contact them if they wish to discuss the AI-assisted decision with a human.
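
Purely as an illustration of tasks 3 and 4 (this sketch is not drawn from the guidance, and the feature names, weights, thresholds and wording are all hypothetical), the snippet below shows how an interpretable, linear scoring model makes it straightforward to extract each feature's contribution to a particular decision and to translate those contributions into plain-language reasons for the decision recipient:

```python
# A minimal, hypothetical sketch of tasks 3 and 4: an interpretable linear
# scoring model whose per-feature contributions can be extracted (task 3) and
# then translated into plain-language reasons (task 4). Not taken from the guidance.

# Hypothetical model: weights learned elsewhere, applied to a loan application.
WEIGHTS = {"income": 0.4, "existing_debt": -0.6, "years_at_address": 0.2}
BASELINE = {"income": 30_000, "existing_debt": 5_000, "years_at_address": 3}
SCALE = {"income": 10_000, "existing_debt": 2_000, "years_at_address": 2}
THRESHOLD = 0.5

PLAIN_LANGUAGE = {
    "income": "your declared income",
    "existing_debt": "your existing level of debt",
    "years_at_address": "how long you have lived at your current address",
}


def score_with_contributions(applicant: dict) -> tuple[float, dict]:
    """Score an application and record each feature's contribution (task 3)."""
    contributions = {
        feature: WEIGHTS[feature] * (applicant[feature] - BASELINE[feature]) / SCALE[feature]
        for feature in WEIGHTS
    }
    return sum(contributions.values()), contributions


def rationale(applicant: dict) -> str:
    """Translate the scored output into a non-technical rationale (task 4)."""
    score, contributions = score_with_contributions(applicant)
    decision = "approved" if score >= THRESHOLD else "declined"
    # Order features by how strongly they influenced this particular decision.
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    reasons = [
        f"{PLAIN_LANGUAGE[feature]} "
        f"{'supported' if value > 0 else 'counted against'} the application"
        for feature, value in ranked
    ]
    return f"Your application was {decision}. The main factors were: " + "; ".join(reasons) + "."


print(rationale({"income": 42_000, "existing_debt": 9_000, "years_at_address": 1}))
```

With an opaque "black box" model, the per-feature contributions in this sketch would instead have to be approximated by a supplemental interpretability tool, which is why the guidance asks organisations to consider explainability at the build stage.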

Part 3: What explaining AI means for your organisation

The final part of the guidance is primarily aimed at senior executives within organisations and covers the roles, policies, procedures and documentation that should be put in place to ensure that organisations are able to provide meaningful explanations to individuals who are subject to AI-assisted decisions.

Organisations should identify the specific people involved in providing explanations to individuals about AI-assisted decisions: product managers, AI development teams, implementers, compliance teams and senior management. The guidance makes it clear that everyone involved in the decision-making pipeline (from design through to implementation of the AI model) has a part to play in delivering explanations to the individuals affected by the AI model's decisions. The guidance also provides an overview of the expectations, with respect to providing explanations, associated with some of these specific roles within an organisation.

The guidance acknowledges that not every organisation will build its own AI systems and that many may procure them from third-party vendors. Even if an organisation is not involved in designing and building the AI system or collecting the data for it, it should ask the vendor questions about the AI system in order to meet its obligations as a data controller and to be able to explain the AI-supported decisions to affected individuals. Appropriate procedures should be in place to ensure that the vendor has taken the steps necessary for the controller to be able to explain the AI-assisted decisions.

An organisation's policies and procedures should cover the explainability considerations contained in the guidance; as the guidance itself puts it, "in short, they [policies and procedures] should codify what’s in the different parts of this guidance for your organisation". The guidance also covers some of the procedures an organisation should put in place (such as training), as well as the documentation needed to effectively demonstrate the explainability of an AI system (including that legally required under the GDPR).

Conclusion

The guidance from the ICO and the Turing is a positive step towards helping organisations achieve compliance with data protection legislation in a complex and ever-evolving area. Following the ICO's earlier draft guidance on the AI auditing framework, published for consultation, this new guidance is a further welcome step from the ICO for compliance and technical teams. It is encouraging to see the ICO and the Turing working together to bridge the gap between regulatory requirements and the technical solutions that can be adopted to meet them.

Achieving compliance, and taking a consistent approach to maintaining it across the growing legal and regulatory landscape in this area, is an increasingly difficult but important task for organisations, given the ever-expanding number, scope and sophistication of AI solutions being implemented. Following the guidance will help organisations address not only some of the legal risks associated with AI-assisted decision making, but also some of the ethical issues involved in using AI systems to make decisions about individuals.
