Machine learning algorithms and other applications of artificial intelligence are making more and more day-to-day business decisions. Thirty years ago, if an entrepreneur wanted a loan for her startup, she'd walk into a bank and talk to a loan officer. Ten or twenty years ago, she might apply online, but a loan officer would still have final approval. Now, a machine learning algorithm will often make the call. Same for job applicants: thirty years ago, they'd apply by mail or in person. Ten or twenty years ago, online, to a human decision maker. And now, their applications feed into machine learning systems that make the key calls.
Our legal system is still evolving. It has elaborate rules governing how to prove what people did and when. It has long-established assumptions about who legal actors are and how to find their intent. But these rules assume that humans are the last step in a decision process. When a machine learning system, or another form of artificial intelligence, is instead the final step, those assumptions break down.
At a basic level, if our entrepreneur doesn't get a loan today, she can't ask the machine why. Same for the job applicant: the machine won't have an answer. This is true even if the bank or employer keeps a human in the loop at the end. That person will only be able to say that the machine made a determination.
And just as that answer isn't likely to satisfy the entrepreneur or the job applicant, it isn't likely to satisfy a judge, jury or litigation opponent either.
The need to explain how business-critical artificial intelligence systems work, and the difficulty of doing so, is a key new challenge for companies that rely on them. "Artificial intelligence" brings to mind sentient robots, such as the heroic Mr. Data of Star Trek and numerous robotic villains. But today, in reality, it means software systems applying complex mathematics to predict outcomes from the data fed into them. Generally, AI users will want to show that the tool's decisions implement policy choices made by company management, and that data scientists and programmers chose the algorithms and parameters based on those policy choices. The AI is not a decision maker but merely a mechanism for implementing business decisions.
Businesses now recognize that business-critical tools that only highly trained experts can explain create regulatory and litigation risk. Regulators aren't likely to be satisfied when a company points to a machine learning tool rather than explaining why an application was rejected. And most litigation revolves around business decisions. The parties will take and defend depositions of the key human decision makers. They will collect, review and produce the documents those decision makers created. These procedures are well defined. But there is no way to depose an artificial intelligence tool, so a company that allows its tool to be seen as the decision maker will have difficulty defending its decisions. And the inputs and outputs that reflect the tool's operation will not be decipherable by judges, juries or other non-experts.
As a result, many companies are focusing on explainability. Explainability, in general terms, has three aspects:
- Transparency: easy identification of the important factors in the tool's operation;
- Interpretability: easy identification and explanation of how the tool weights those factors and derives them from its input data; and
- Provenance: easy identification of where input data originated.
When an AI tool has all three aspects, a company can explain its results to a regulator, judge or jury in plain language. That is, it can say, "The tool came to this result because it took these inputs, applied these weights to them and derived this result." To achieve that, we recommend that:
- The management team clearly specifies to its data scientists and technicians how the company wants the tool to work, recognizing that those specifications are, in fact, the business decisions;
- The tool is built to store the right facts about how it arrived at its results, in a manner approved by the company's e-discovery/information governance (EDIG) team;
- The company employs “AI sustainers” to continually test and modify the tool to keep it working as the management team intended; and
- The company employs “AI explainers,” people who know how to explain the tool’s results.
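To make the "inputs, weights, result" explanation concrete, here is a minimal, hypothetical sketch of a transparent scoring tool. Every detail is illustrative: the feature names, the weights (which stand in for management's policy choices) and the approval threshold are invented for this example, not drawn from any real underwriting model. The point is that a tool built this way can report its own reasoning, satisfying all three aspects: transparency (the factors are visible), interpretability (each factor's weighted contribution is reported) and provenance (the raw inputs are retained with the result).

```python
# Hypothetical, minimal sketch of an explainable loan-scoring tool.
# Feature names, weights and the approval threshold are illustrative
# policy choices, not a real underwriting model.

LOAN_POLICY_WEIGHTS = {            # transparency: the important factors
    "credit_score_norm": 0.5,      # credit score, normalized to 0..1
    "debt_to_income_inv": 0.3,     # 1 minus debt-to-income ratio
    "years_in_business_norm": 0.2, # years in business, normalized to 0..1
}
APPROVAL_THRESHOLD = 0.6           # set by management, not by the model


def score_application(features: dict) -> dict:
    """Return the decision plus the audit trail needed to explain it."""
    contributions = {               # interpretability: weight * input
        name: LOAN_POLICY_WEIGHTS[name] * features[name]
        for name in LOAN_POLICY_WEIGHTS
    }
    total = sum(contributions.values())
    return {
        "inputs": dict(features),          # provenance: what came in
        "contributions": contributions,    # interpretability: per-factor weight
        "score": round(total, 3),
        "approved": total >= APPROVAL_THRESHOLD,
    }


result = score_application({
    "credit_score_norm": 0.8,
    "debt_to_income_inv": 0.6,
    "years_in_business_norm": 0.4,
})
print(result["score"], result["approved"])
```

A record like `result` is exactly what an "AI explainer" would draw on: it supports the plain-language sentence "the tool approved this application because a 0.8 normalized credit score, weighted at 0.5, contributed 0.4 of a 0.66 score, above the 0.6 threshold management set." Production systems are far more complex, but the design goal is the same: retain the inputs, the weighted contributions and the result together.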
In litigation, then, explainability translates into defensibility. The “AI explainers” within the company will be able to use data retained by the EDIG group to explain how the decisions reflect corporate policy. For plaintiffs, it may be very difficult to find an expert who can speak with authority on an extremely complex proprietary AI tool, at least compared to a data scientist, an AI explainer or an AI sustainer who was part of the team building and sustaining the tool. From a litigation perspective, explainability allows humans to take the witness stand to defend business decisions implemented through an AI tool instead of having the plaintiff’s counsel claim that the company is responsible for how a villainous robot abused the plaintiff. The issues shift to the more defensible questions of whether management chose the right policies and whether the technicians configured the tool correctly.