September 1, 2020

Electronic Discovery & Information Governance – Tip of the Month: TAR Tools Have Some Explaining to Do


Scenario

A company just received a charge of race discrimination filed by a rejected job applicant. In preparing to defend the charge, the general counsel learns that the company received thousands of applications for the position. To cull the applications, the company used artificial intelligence to review skills and qualifications identified by the applicants. The general counsel is satisfied that the results achieved in part using artificial intelligence were not discriminatory, but she wonders how best to prove this to defend against the race discrimination charge.

More Than Results

Artificial intelligence ("AI") isn't just a buzzword for eDiscovery practitioners. It's been a part of their standard toolset for a decade now, starting with early "predictive coding" tools. eDiscovery now supports a robust, mature selection of technology-assisted review ("TAR") AI technologies, with Continuous Active Learning being the most widely adopted. Practitioners thus have direct experience with the benefits (and limitations) of AI. This experience can be of great benefit to their clients and businesses.

As AI brings transformations to other sectors, though, it has become clear that results aren't all that matters. AI users need to be able to explain their results, whether they are recommendations for mortgage rates, hiring decisions, or flags on potentially privileged documents. A black box AI that takes inputs and produces conclusions without an explanation that makes sense to humans can be hard to defend. If its conclusion is wrong, there is no easy way to explain why or how to fix the problem. And if it's right, it can be hard to explain why. AI that is "explainable" is more likely to gain user trust and acceptance. And, for eDiscovery practitioners, a TAR process that is "explainable" is easier to defend than one that isn't.

What explainability features should we look for in a TAR tool? The National Institute of Standards and Technology ("NIST") white paper "Four Principles of Explainable Artificial Intelligence," published for comment on August 18, 2020, articulates four fundamental principles of explainable AI that translate directly into qualities to look for in a TAR tool:

  • Explanation. It supplies evidence, support, or reasoning for its results. That means an audit trail for the TAR tool's estimates, not just for those of human reviewers.
  • Meaningful. Its explanation must make sense to its intended audience. An explanation that works for data scientists with PhDs in statistics is not likely to be understandable to a job applicant, a judge, or a jury. Defending the tool means convincing the final decision-maker, not a scientific body.
  • Explanation Accuracy. Its explanation must be right. That doesn't mean that the tool’s results must be right (though, of course, that’s preferable). But if its conclusion is wrong, the tool should be able to explain why.
  • Knowledge Limits. It must flag cases where it is less confident in its conclusions. Put another way, the tool should make it easy for its users to know how sure it is (a minimal sketch of such a confidence flag follows this list). Users can then make an informed decision about how much to trust the tool and how much to rely on human review.
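
In practice, the Explanation and Knowledge Limits principles are easiest to picture as an auditable, per-document record that captures the tool's call, the evidence behind it, and a confidence flag. The sketch below illustrates one way that might look; the field names, threshold, and uncertainty band are hypothetical assumptions, not features of any particular TAR product.

```python
import json
from datetime import datetime, timezone

def log_prediction(doc_id, score, threshold=0.5, uncertainty_band=0.15):
    """Record the model's call, the evidence behind it, and whether a human should re-check it."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "document_id": doc_id,
        "predicted_responsive": score >= threshold,
        "model_score": round(score, 3),                              # Explanation: evidence for the call
        "low_confidence": abs(score - threshold) < uncertainty_band,  # Knowledge Limits: flag uncertain calls
    }
    if entry["low_confidence"]:
        entry["routed_to"] = "human_review_queue"  # fall back to human review when the tool is unsure
    return json.dumps(entry)

# A borderline score is flagged for human review rather than trusted outright.
print(log_prediction("DOC-00042", score=0.56))
```

A record like this gives counsel something concrete to point to when asked why the tool reached a conclusion and what happened when it was unsure.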

A Comparison

The white paper also makes the excellent point that this assessment is in essence a comparison. Rather than being graded on an absolute scale, the TAR tool should be compared to human reviewers and human review managers. Humans are not perfect on any of these principles; in fact, they tend to perform relatively poorly on Explanation Accuracy and Knowledge Limits. On Explanation Accuracy, for example, the white paper points out that studies show people often get the right answer without being able to explain why. On Knowledge Limits, people are poor judges of their own accuracy at mental tasks, and the very act of examining their own decisions can make those decisions less accurate.

The white paper thus provides a framework to assess—and ultimately defend—TAR tools. It stresses that a well-designed and well-managed TAR tool, for all its complexity, may be easier to explain and defend than a process relying on unassisted human judgment.
