October 30, 2020

Electronic Discovery & Information Governance – Tip of the Month: Who Gets to Determine the Document Review Process?

Scenario

A company engaged in a large civil litigation has received dozens of discovery requests from opposing counsel that require the production of responsive documents across a 10-year time span. The document collection from the negotiated custodians yields over 5 million documents. The parties agree to use search terms to isolate a subset of potentially responsive documents but are unable to agree on a method for identifying responsive documents within that universe. The company wants to use technology-assisted review (“TAR”) tools to assess potential responsiveness, but opposing counsel insists that the company produce all documents identified by the agreed-on search terms except those identified as privileged.

Traditional Review vs. Review Using TAR Tools

The evolution and widespread acceptance of artificial intelligence (“AI”) TAR tools have increased the options available to eDiscovery practitioners for identifying relevant and responsive materials in large-scale litigations. Traditionally, after applying a search term filter to narrow the population of documents requiring review, teams of lawyers would look at every remaining document to determine whether it was responsive to a discovery request and/or privileged. The problem with this type of manual review is that it is not only extremely expensive and time-consuming, but its results are often inconsistent and fraught with human error.
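For readers who want a concrete picture of the search-term culling step, the short Python sketch below shows the basic idea: documents that hit on at least one negotiated term move forward to review, and the rest fall out of the population. The terms and documents are hypothetical placeholders; real eDiscovery platforms use far more sophisticated indexing and search syntax.

```python
import re

# Hypothetical negotiated terms; a real matter would use the parties'
# agreed-on list and the review platform's search syntax.
SEARCH_TERMS = ["merger", "side letter", "indemnif"]
PATTERN = re.compile("|".join(re.escape(t) for t in SEARCH_TERMS), re.IGNORECASE)

def cull(documents):
    """Return only the documents that hit on at least one search term;
    everything else falls out of the review population."""
    return [doc for doc in documents if PATTERN.search(doc)]
```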

While AI does not entirely solve these problems, or even completely eliminate the need for some human review, it can dramatically reduce the cost of responding to discovery requests in large litigations. The structure of a review utilizing TAR depends on the TAR tool selected. Predictive coding (“TAR 1.0”) requires subject matter experts to review a small subset of the document population for responsiveness. The machine learns from the coding decisions of the subject matter experts and is eventually able to apply responsiveness determinations across the unreviewed documents. In the end, only a small subset of the documents is reviewed by a human for responsiveness. Continuous Active Learning (“CAL” or “TAR 2.0”), on the other hand, continuously analyzes the results of human review and frequently reorders the review population so that lawyers look at the documents most likely to be responsive first. Eventually, the number of responsive documents dwindles and the remaining documents do not need to be reviewed. While more documents are manually reviewed using CAL than TAR 1.0, the number is often dramatically smaller than what would be manually reviewed without the tool.
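To make the CAL workflow concrete, here is a minimal Python sketch of the basic loop: train a classifier on the documents coded so far, re-rank the unreviewed pool, have reviewers code the top-ranked batch, and repeat until responsive documents stop surfacing. This is a simplified illustration under stated assumptions, not any vendor's actual implementation; the `human_label` callback, the batch size, and the stopping rule are all hypothetical.

```python
# A minimal sketch of a Continuous Active Learning ("CAL") loop, using
# scikit-learn for the classifier. `human_label(i)` is a hypothetical
# stand-in for a reviewer coding document i as responsive (True) or not.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def cal_review(docs, human_label, seed_idx, batch_size=100, stop_after=3):
    """Review `docs` in responsiveness-ranked batches; stop once `stop_after`
    consecutive batches surface no responsive documents. The seed set must
    contain at least one responsive and one nonresponsive example."""
    X = TfidfVectorizer().fit_transform(docs)
    coded = {i: human_label(i) for i in seed_idx}   # reviewer decisions so far
    unreviewed = set(range(len(docs))) - set(coded)
    empty_streak = 0
    while unreviewed and empty_streak < stop_after:
        idx = list(coded)
        model = LogisticRegression(max_iter=1000).fit(
            X[idx], [coded[i] for i in idx])
        # Re-rank the unreviewed pool so the documents most likely to be
        # responsive are presented to reviewers first.
        pool = list(unreviewed)
        scores = model.predict_proba(X[pool])[:, 1]
        batch = [i for _, i in sorted(zip(scores, pool), reverse=True)][:batch_size]
        decisions = {i: human_label(i) for i in batch}  # humans code this batch
        empty_streak = 0 if any(decisions.values()) else empty_streak + 1
        coded.update(decisions)
        unreviewed -= set(batch)
    return coded, unreviewed  # human-coded docs and the never-reviewed remainder
```

The key design point the sketch illustrates is that the remaining documents in `unreviewed` are never looked at by a human; the stopping rule stands in for the point at which the software establishes that the queue is unlikely to contain further responsive material.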

If a party elects to use either CAL or TAR 1.0, a subset of the documents that hit on the agreed-on search terms is (1) never manually reviewed and (2) deemed not responsive to the discovery requests (and therefore not produced to the requesting party). If the requesting party objects, can it compel the responding party to manually review each document, or to produce all documents hitting on the search terms without a responsiveness review? A magistrate judge in the Northern District of Illinois recently said, “No.”

Livingston v. City of Chicago: The Responding Party Is Best Situated to Decide How to Search for and Produce Documents

On September 3, 2020, Magistrate Judge Young B. Kim of the Northern District of Illinois denied a request by the plaintiffs in Livingston v. City of Chicago to compel the City to use agreed-on search terms to identify responsive documents and then to conduct a manual review only for privilege.

The parties previously agreed on search terms that identified a universe of 192,000 emails within the City’s initial 1.5 million document collection. The City sought to use CAL to prioritize documents identified by the search terms for manual responsiveness review by lawyers. The plaintiffs argued that CAL would allow the City to leave unreviewed a large portion of the document collection, which included documents potentially responsive to their discovery requests. The plaintiffs expressed concern that the document review team would improperly train the TAR tool, resulting in incorrect responsiveness determinations and a premature end to the manual review. The plaintiffs further argued that if the City was allowed to use TAR tools, it should be required to implement TAR across the entire ESI collection without advance search term culling and to use the TAR protocol proposed by the plaintiffs.

The court acknowledged that “there comes a point when, based on the reviewers’ coding decisions, the software establishes that the remaining documents in the queue are likely to be nonresponsive.” Livingston v. City of Chicago, No. 16-cv-10156 (N.D. Ill. Sept. 3, 2020). Nevertheless, the court determined that the use of TAR was reasonable and proportional to the needs of the case. The court affirmed that TAR should not be held to a higher standard than other review methods. The potential for coding errors and incorrect responsiveness determinations exists regardless of the review methodology employed (i.e., whether a party elects to conduct a completely manual review process or decides to utilize TAR tools). The court found that the quality control applications the City intended to use in conjunction with CAL were sufficient. Id. Finally, the court agreed with the City that the federal rules do not require a responding party to conduct its responsiveness review in a manner dictated by a requesting party. Id. Affirming Sedona Principle Six, which provides that the party responding to the discovery request is best situated to determine how to search for and produce its own documents, the court held that the City’s disclosure of its intent to use TAR and how it intended to validate the results was “sufficient information to make the production transparent.” Id.
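The opinion does not spell out the City’s validation protocol, but one widely used quality control technique in TAR workflows, offered here purely for context, is an “elusion” sample: reviewers code a random draw from the unreviewed discard pile to estimate how many responsive documents were left behind. The sketch below is illustrative only, with a hypothetical `human_label` callback and sample size.

```python
import random

def elusion_estimate(discard_ids, human_label, sample_size=500, seed=1):
    """Estimate the rate of responsive documents left in the unreviewed
    'discard' population by coding a simple random sample of it."""
    if not discard_ids:
        return 0.0
    rng = random.Random(seed)
    sample = rng.sample(list(discard_ids), min(sample_size, len(discard_ids)))
    responsive = sum(1 for i in sample if human_label(i))  # reviewer calls
    return responsive / len(sample)  # e.g., 0.01 means roughly a 1% elusion rate
```

A low elusion rate is one way a responding party can make its production transparent without manually reviewing, or producing, every document that hit on the search terms.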

Conclusion

For parties responding to discovery requests, TAR tools create multiple options to structure the review and production of documents in an efficient and cost-effective manner. By affirming that the responding party is in the best position to determine how to meet its own discovery obligations, the Livingston case is helpful precedent for responding parties that wish to use AI technology without the approval—or against the wishes—of the party requesting discovery.
