Plaintiffs increasingly rely on questionable expert testimony to fill gaps in their proof, trusting that the imprimatur of an expert will overcome otherwise insuperable deficiencies in their cases. In employment cases, where dubious expert testimony frequently is offered to show disparate impact or to bolster claims of disparate treatment, the success or failure of the defendant’s efforts to exclude expert testimony may determine the outcome of the case.
In a strongly worded opinion issued on April 9, the US Court of Appeals for the Sixth Circuit in EEOC v. Kaplan Higher Education Corp., No. 13-3408, 2014 WL 1378197 (6th Cir. Apr. 9, 2014), provided support for challenges to shoddy expert testimony by reinforcing the requirement under the Federal Rules of Evidence that the proponent of expert testimony produce evidence validating each part of the proposed expert’s methodology. As the Sixth Circuit put it, the EEOC “brought [a] case on the basis of a homemade methodology, crafted by a witness with no particular expertise to craft it, administered by persons with no particular expertise to administer it, tested by no one, and accepted by only the witness himself.”
Under Federal Rule of Evidence 702, before admitting expert testimony, a trial court must determine that the expert is qualified in the relevant field, that the testimony is relevant, and that the expert’s methodology is reliable. As Kaplan demonstrates, the reliability requirement must be met with respect to both the expert’s method of gathering data and each step of the expert’s analysis.
In Kaplan, the EEOC sued Kaplan, a for-profit educational company, alleging that its practice of conducting a credit check before hiring certain staff violated Title VII of the Civil Rights Act of 1964 because it had a disparate impact on African-American applicants. To demonstrate a disparate impact, the EEOC subpoenaed information from one of the third-party credit-check services used by Kaplan. That information, however, did not indicate the applicants’ race. In order to determine the race of applicants, the EEOC obtained records for the applicants from various states’ departments of motor vehicles. For most states, however, those records did not indicate the individual’s race, but did include a color copy of the applicant’s driver’s license.
In an effort to identify the race of each applicant from the picture on his or her driver’s license, the EEOC’s expert invented a process called “race rating” in which five “race raters” independently identified the race of each applicant based on a visual inspection of the photograph. If at least four of the five agreed, then the applicant was categorized as being a member of the identified race. Considering each of the reliability criteria set out in Daubert v. Merrell Dow Pharmaceuticals, Inc., 509 U.S. 579 (1993), the trial court held that the EEOC failed to demonstrate the reliability of this methodology, and the Sixth Circuit agreed.
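The 4-of-5 majority rule described above can be sketched in a few lines. This is a minimal illustration of the decision rule only; the categories and ratings shown are hypothetical examples, not the EEOC’s actual data.

```python
# Sketch of the "race rating" majority rule: five raters each assign a
# category, and an applicant is classified only if at least four of the
# five agree. Purely illustrative; not the EEOC's records or code.
from collections import Counter

def race_rate(ratings, threshold=4):
    """Return the majority category if >= threshold raters agree, else None."""
    category, count = Counter(ratings).most_common(1)[0]
    return category if count >= threshold else None

print(race_rate(["A", "A", "A", "A", "B"]))  # 4 of 5 agree -> "A"
print(race_rate(["A", "A", "A", "B", "B"]))  # only 3 of 5 agree -> None
```

Note that the rule simply discards applicants on whom the raters split 3-2, a design choice the opinion does not suggest was ever validated.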
First, the Sixth Circuit held that the EEOC had not demonstrated either that its expert’s race-rating method had been tested or that it had an acceptable rate of error. The EEOC had cross-checked the results of the expert’s methodology against other external sources for 57 out of 1,090 applicants. It found 95.7 percent agreement between the race-rating methodology and one source but only 80 percent agreement with another source. The Sixth Circuit implied that 80 percent agreement may not be sufficient “in a case where a few percentage points … might make the difference between significant liability and none” but held that, in any event, the cross-check sample size was insufficient to establish the reliability of the methodology, as the EEOC’s expert admitted.
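The court’s point about sample size can be made concrete with standard statistics. The sketch below applies a textbook normal-approximation confidence interval to an agreement rate of roughly 80 percent observed on a cross-check of 57 applicants; the specific agree/total counts are assumed for illustration (the opinion reports only the percentages), and the interval is a conventional statistical device, not anything the court computed.

```python
# Illustration of why a 57-applicant cross-check is a thin basis for
# validating a method applied to 1,090 applicants: with so few
# observations, the observed agreement rate carries a wide margin of
# error. Counts are assumed for illustration.
import math

def agreement_interval(agree, total, z=1.96):
    """Observed agreement rate with an approximate 95% confidence interval."""
    p = agree / total
    se = math.sqrt(p * (1 - p) / total)       # binomial standard error
    return p, max(0.0, p - z * se), min(1.0, p + z * se)

# e.g., roughly 80 percent agreement observed on 57 cross-checked applicants
p, low, high = agreement_interval(agree=46, total=57)
print(f"agreement {p:.1%}, 95% CI roughly {low:.1%} to {high:.1%}")
```

Under these assumptions the interval spans roughly 70 to 91 percent, which is consistent with the expert’s own admission that the cross-check sample was too small to establish reliability.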
Second, the court observed that the expert’s methodology, which was invented for purposes of litigation, had not been subject to peer review and publication, noting that “‘submission to the scrutiny of the scientific community is a component of good science, in part because it increases the likelihood that substantive flaws in methodology will be detected.’”
Third, the Sixth Circuit held that the race-rating methodology lacked standards controlling the technique’s operation because the five race-raters had no particular standard for classifying the applicant’s race, but simply relied on their visual impression of the applicant’s photograph. Notably, the court also chastised the EEOC because the race raters were given the applicants’ names but told to ignore them. As the Sixth Circuit emphasized, it was the EEOC’s burden, as the proponent of the proposed expert testimony, to prove that this did not impact the reliability of the methodology.
Fourth, the court pointed out that there was no evidence that the race-rating methodology was generally accepted in the scientific community. Indeed, as the court observed, the EEOC itself discourages employers from relying on visual identification of an individual’s race.
Finally, the Sixth Circuit agreed with the trial court that the sample of 1,090 applicants analyzed by the expert was not representative of the applicant pool as a whole because 23.8 percent of applicants in the sample failed the credit check whereas only 13.3 percent of applicants overall failed.
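The representativeness problem the court identified is a straightforward comparison of two proportions, sketched below using the figures from the opinion.

```python
# The court's representativeness comparison: the credit-check failure
# rate in the expert's 1,090-applicant sample versus the overall
# applicant pool. Rates are taken from the opinion.
sample_fail_rate = 0.238   # failure rate within the expert's sample
overall_fail_rate = 0.133  # failure rate among all applicants

# The sample over-represents failing applicants by a wide margin:
ratio = sample_fail_rate / overall_fail_rate
print(f"sample failure rate is {ratio:.2f}x the overall rate")  # ~1.79x
```

A sample in which applicants who failed the credit check appear at nearly twice their actual rate cannot support an inference about the screening practice’s impact on the applicant pool as a whole.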
Kaplan is a bulwark for defendants besieged by result-oriented expert testimony. It confirms that defendants can attack such testimony on multiple fronts: the selection of data, each step in the expert’s analysis, the choice of individuals who conduct any part of the analysis and any error correction method or cross-check performed in an effort to validate the methodology. Kaplan is, of course, but one battle in a much longer war over the permissible use and scope of expert testimony, but it gives defendants reason to hope that the war is winnable.