Parties Disagreed on the Outcome of a Three-Year Electronic Discovery Dispute
Since 2013, the Commissioner of Internal Revenue has been embroiled in an electronic discovery dispute that may finally have been resolved in Dynamo Holdings Limited Partnership v. Commissioner of Internal Revenue, 143 T.C. 183 (2014).
Petitioners sought the court’s permission to use predictive coding, a type of technology-assisted review (TAR), to respond to the Commissioner’s requests for electronically stored information (ESI). The resulting opinion is now famous as one of the early court decisions to officially endorse the use of predictive coding as an “expedited and efficient form of computer-assisted review.” Id. at 190.
Now, roughly two years later, the parties have reached an impasse after the petitioners’ ESI was produced using predictive coding. So how did it go? Let’s review the latest Dynamo court opinion, dated July 13, 2016, to see where each party stands in the aftermath.
“The Quality of That Response is Now Before Us.”
After the petitioners’ ESI production was delivered, the Commissioner filed a motion under Tax Court Rule 72(b)(2) to compel the production of certain additional documents. The Commissioner contended that certain documents identified by a preliminary Boolean search were not part of the final predictive coding production. Petitioners objected, contending the Commissioner was wrong and that the documents not included in the production were non-responsive or outside the relevant time frame and scope.
The court considered the quality of the predictive coding response by examining the process and method the parties used to produce the electronically stored information.
How Did Predictive Coding Work in This Case?
First, the Commissioner requested a Boolean search and provided a list of 76 search terms to run against the processed data, which comprised 406,939 documents. Petitioners sent back a table of the results, which included the number of “individual term hits,” “documents with term hits,” and “individual documents containing only a single term.”
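To make the term-hit table concrete, here is a minimal Python sketch of how such a Boolean report might be tabulated. The file names, document text, and search terms below are hypothetical placeholders rather than the parties’ actual data or tooling, and “individual term hits” is interpreted here as the number of documents matching each term.

    # Hypothetical sketch of a Boolean term-hit report; not the parties'
    # actual data or review platform.
    documents = {
        "doc1.msg": "partnership transfer valuation memo",
        "doc2.pdf": "loan agreement between the entities",
        "doc3.txt": "quarterly valuation summary",
    }
    search_terms = ["valuation", "loan", "transfer"]  # the case used 76 terms

    term_hits = {term: 0 for term in search_terms}  # documents matching each term
    docs_with_hits = set()                          # documents with any term hit
    single_term_docs = 0                            # documents matching only one term

    for name, text in documents.items():
        matched = [t for t in search_terms if t in text.lower()]
        for t in matched:
            term_hits[t] += 1
        if matched:
            docs_with_hits.add(name)
        if len(matched) == 1:
            single_term_docs += 1

    print(term_hits)            # {'valuation': 2, 'loan': 1, 'transfer': 1}
    print(len(docs_with_hits))  # 3 documents with term hits
    print(single_term_docs)     # 2 documents containing only a single term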
Second, the parties selected and reviewed two seed sets of 1,000 randomly chosen documents each. The Commissioner identified, out of each set, which documents were responsive and which were not. This coding trained the predictive model to recognize responsive documents.
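The opinion does not name the review platform or the model behind the predictive coding, but the training step can be illustrated with a short scikit-learn sketch. Everything below, including the sample documents, the reviewer coding, and the choice of a TF-IDF/logistic regression pipeline, is an assumption made for illustration only.

    # Illustrative only: the actual review software and model used in
    # Dynamo Holdings are not identified in the opinion.
    import random
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    def draw_seed_set(all_docs, size=1000, seed=0):
        """Randomly select a seed set for reviewer coding, as the parties did."""
        return random.Random(seed).sample(all_docs, min(size, len(all_docs)))

    # Hypothetical seed-set documents and reviewer coding
    # (1 = responsive, 0 = non-responsive).
    texts = ["loan agreement draft", "asset valuation schedule",
             "holiday party invitation", "cafeteria menu"]
    labels = [1, 1, 0, 0]

    vectorizer = TfidfVectorizer()
    model = LogisticRegression().fit(vectorizer.fit_transform(texts), labels)

    # The trained model can now assign every unreviewed document a
    # probability of responsiveness -- the "relevancy score" in the opinion.
    score = model.predict_proba(vectorizer.transform(["valuation memo"]))[0, 1]
    print(round(score, 2))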
Third, the parties decided on a recall rate, which could range from 65 to 95 percent. The higher the recall rate, the greater the number of documents, both responsive and non-responsive, that would be produced. The Commissioner chose to train the predictive coding model to return 95 percent of the responsive documents. This culled the production down to 180,000 documents in total, each with a relevancy score. The Commissioner then identified 5,796 documents he wished to retain.
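How a recall target like the Commissioner’s 95 percent translates into a document cutoff can be sketched as follows: rank the documents by relevancy score and walk down the list until the chosen share of known responsive documents is captured. The scores and labels below are hypothetical; in practice they would come from a reviewer-coded validation sample.

    # Hypothetical illustration of picking a score cutoff for a recall target.
    import numpy as np

    def cutoff_for_recall(scores, labels, target_recall=0.95):
        """Lowest relevancy score whose production meets the recall target."""
        order = np.argsort(scores)[::-1]            # highest scores first
        sorted_labels = np.asarray(labels)[order]
        recall = np.cumsum(sorted_labels) / sorted_labels.sum()
        k = np.searchsorted(recall, target_recall)  # first rank meeting the target
        return np.asarray(scores)[order][k]

    scores = [0.98, 0.90, 0.75, 0.60, 0.40, 0.10]  # model relevancy scores
    labels = [1, 1, 0, 1, 0, 0]                    # reviewer coding: 1 = responsive
    print(cutoff_for_recall(scores, labels))       # 0.6: produce every doc scoring >= 0.6

Every document at or above the cutoff is produced, which is why raising the recall target pulls more low-scoring, mostly non-responsive documents into the production.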
Was There a Shortcoming in the Response?
The Commissioner alleged that 1,353 documents identified by the Boolean search were missing from the final predictive coding production. Petitioners objected, noting that 440 of those documents had in fact been produced, bringing the number of possibly “missing” documents down to 920. Of those 920 documents, the predictive coding model had excluded 765 as non-responsive, and after sampling them, Petitioners noted that most predated or postdated the relevant time period. Petitioners contended that the predictive coding had worked correctly and that the production was complete.
The court first noted that because the Commissioner chose a recall rate of 95% (which corresponded to a precision rate of only 3%), the model would produce more non-responsive documents (“false positives”) than it would at a lower recall rate. Recall and precision are inherently at odds: there is a tradeoff, and the higher the recall rate, the lower the precision rate. Because the Commissioner selected the high 95% recall rate, the resulting precision rate was low, so the model was expected to return many non-responsive documents.
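A back-of-the-envelope calculation using only the figures stated in the opinion shows why a 95% recall rate paired with 3% precision yields a production dominated by false positives:

    # recall    = TP / (TP + FN): share of all responsive docs that get produced
    # precision = TP / (TP + FP): share of produced docs that are responsive
    produced = 180_000  # documents returned at the 95% recall setting
    precision = 0.03    # precision rate noted by the court

    estimated_responsive = int(produced * precision)
    print(estimated_responsive)  # 5400, close to the 5,796 documents
                                 # the Commissioner ultimately identified

On these numbers, roughly 174,600 of the 180,000 produced documents would be expected to be non-responsive, which is precisely the tradeoff the court described.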
The Court Busts Two Common Myths About Predictive Coding in Dynamo Holdings
The court noted that the low precision rate followed from the high recall rate the Commissioner himself chose, and that no predictive coding model is perfect. But should relief be afforded? The court busted two myths in analyzing this issue.
The court first busted the “myth of human review,” the notion that human or manual review is the height of accuracy, when in reality it is anything but. Research has shown that people (even attorneys!) regularly make mistakes when judging relevance and responsiveness in a set of documents. When two people review the same set of documents for relevance and responsiveness, they tend to disagree on more than half of the responsive documents.
The second myth the court addressed is the myth of the perfect response, which is not required under the Federal Rules of Civil Procedure. FRCP 26(g) requires only a “reasonable inquiry” in identifying proper discovery responses. The court therefore concluded that Petitioners did, in fact, satisfy the requirements of the FRCP when they used predictive coding to prepare their responses, and it denied Respondent’s motion to compel.
The plaintiff eDiscovery experts at ILS use predictive coding and technology-assisted review, working closely with their clients and opposing parties, as a cost-effective and efficient means of producing relevant documents in civil litigation. Reach out to us to learn more about our electronic discovery services.