Over the past decade, it has become increasingly commonplace for employers to use algorithmic decision-making tools in employment. Employers use a wide range of such tools to assist them in hiring, performance management, and other employment decisions.
In 2021, the US Equal Employment Opportunity Commission ("EEOC"), the primary federal agency responsible for enforcing federal non-discrimination laws, launched an agency-wide "Artificial Intelligence and Algorithmic Fairness Initiative" to ensure that the use of software (including AI, machine learning, and other emerging technologies) in hiring and other employment decisions complies with federal civil rights laws. In May 2022, the EEOC issued guidance regarding compliance with the federal Americans with Disabilities Act ("ADA") when using software, algorithms, and artificial intelligence to make employment decisions: The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees ("ADA AI Guidance"). The ADA AI Guidance made clear that employers can be liable under the ADA if their use of software, algorithms, and artificial intelligence results, for example, in the failure to properly provide or consider an employee's reasonable accommodation request, or in the intentional or unintentional screening out of applicants with disabilities who could perform the job with a reasonable accommodation.
AI Disparate Impact Guidance
Last month, as part of its continuing focus on AI, the EEOC issued its second set of guidance regarding employer use of AI. The EEOC's non-binding technical assistance document, titled "Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964" ("AI Disparate Impact Guidance"), explains how federal non-discrimination laws apply when an employer uses automated systems, algorithms, and artificial intelligence ("AI") tools (referred to as "algorithmic decision-making tools") in making employment decisions.
The EEOC’s AI Disparate Impact Guidance focuses on one aspect of Title VII of the Civil Rights Act’s non-discrimination provisions—the prohibition on “disparate” or “adverse” impact discrimination resulting from the use of algorithmic decision-making tools. Disparate or adverse impact refers to an employer’s use of a facially neutral employment selection procedure or test that has a disproportionately large negative impact on individuals based on characteristics protected under Title VII, such as race, color, religion, sex, or national origin.
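In practice, adverse impact is often screened for by comparing selection rates across groups; the EEOC's guidance discusses the "four-fifths rule," a general rule of thumb (not a definitive legal test) under which a group's selection rate that is less than 80% of the highest group's rate may indicate adverse impact. The sketch below, using made-up applicant numbers, illustrates the arithmetic only and is not legal advice:

```python
# Hypothetical illustration of the selection-rate comparison behind the
# "four-fifths rule" of thumb. Group names and counts are invented.

def selection_rates(outcomes):
    """outcomes maps group -> (number selected, number who applied)."""
    return {g: sel / applied for g, (sel, applied) in outcomes.items()}

def four_fifths_check(outcomes):
    """Return {group: (ratio to highest rate, flagged?)}; a group is
    flagged when its selection rate is below 80% of the highest rate."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {g: (rate / top, rate / top < 0.8) for g, rate in rates.items()}

# Hypothetical data: group_a selects 48 of 80 (60%), group_b 12 of 40 (30%).
result = four_fifths_check({"group_a": (48, 80), "group_b": (12, 40)})
print(result)
# group_b's rate is half the highest rate (below 80%), so it is flagged.
```

A rule-of-thumb screen like this can suggest further review, but, as the EEOC guidance itself notes, statistical thresholds do not by themselves establish or rule out unlawful disparate impact.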
Highlights from the Guidance:
Looking Forward
The use of AI and other software in employment and other areas is becoming a focal point of regulatory scrutiny. On April 25, 2023, the EEOC issued a Joint Statement on Enforcement Efforts Against Discrimination and Bias in Automated Systems with officials from the Department of Justice, the Consumer Financial Protection Bureau and the Federal Trade Commission reiterating their "resolve to monitor the development and use of automated systems and promote responsible innovation" and pledging "to vigorously use [their] collective authorities to protect individuals' rights regardless of whether legal violations occur through traditional means or advanced technologies." In addition, a number of jurisdictions, including Illinois, Maryland, and New York City, have passed laws regarding employer use of AI in the workplace.
Employers who use or are considering the use of algorithmic decision-making tools in employment should be mindful and intentional about their design, implementation, and use, and should ensure that they stay up to date on regulatory and other developments in this rapidly evolving area. Employers should actively engage with the third parties that design, develop, deploy, and/or administer the tools they are using in order to mitigate potential adverse impact, and should regularly self-audit their use of these tools to determine whether the technology is being used in a way that could result in discrimination. Multinational employers should also keep in mind that Title VII can apply to US citizens who primarily work outside the United States if they are employed by a US employer or by a foreign corporation controlled by a US employer.
Mayer Brown is a global legal services provider comprising associated legal practices that are separate entities, including Mayer Brown LLP (Illinois, USA), Mayer Brown International LLP (England & Wales), Mayer Brown Hong Kong LLP (a Hong Kong limited liability partnership) and Tauil & Chequer Advogados (a Brazilian law partnership) (collectively, the “Mayer Brown Practices”). The Mayer Brown Practices are established in various jurisdictions and may be a legal person or a partnership. PK Wong & Nair LLC (“PKWN”) is the constituent Singapore law practice of our licensed joint law venture in Singapore, Mayer Brown PK Wong & Nair Pte. Ltd. Mayer Brown Hong Kong LLP operates in temporary association with Johnson Stokes & Master (“JSM”). More information about the individual Mayer Brown Practices, PKWN and the association between Mayer Brown Hong Kong LLP and JSM (including how information may be shared) can be found in the Legal Notices section of our website.
“Mayer Brown” and the Mayer Brown logo are trademarks of Mayer Brown.
Attorney Advertising. Prior results do not guarantee a similar outcome.