April 28, 2023

US FTC, DOJ, EEOC, and CFPB Release Joint Statement on AI, Discrimination and Bias


The upshot, for busy people:

  • On April 25, 2023, the Federal Trade Commission (FTC), the Department of Justice Civil Rights Division (DOJ), the Equal Employment Opportunity Commission (EEOC), and the Consumer Financial Protection Bureau (CFPB) issued a joint statement (the Joint Statement) announcing that each is already scrutinizing, and will continue to scrutinize, possible discrimination involving AI systems and other automated processes.
  • The Joint Statement summarizes each department’s and agency’s work on artificial intelligence (AI) and discrimination to date and flags their concerns regarding potential discrimination arising from (a) data sets that train AI systems, (b) opaque “black box” models that make anti-bias diligence difficult, and (c) the risk that third parties may use models in unforeseen ways.
  • The Joint Statement notes that existing legal authorities apply to the use of AI tools just as they do to other conduct.
  • Like the Biden Administration’s Blueprint for an AI Bill of Rights, the Joint Statement does not itself impose any new legal obligations on companies but, rather, helps to clarify the priorities of multiple agencies.

The Joint Statement

The Biden Administration has been focused on AI as it matures and expands across a broad range of industries. For example, in October 2022, the Administration released the Blueprint for an AI Bill of Rights, which set forth a non-binding framework for how agencies should approach consumer issues related to AI, within the purview of each agency’s statutory authorities. That document outlined five core “rights”: (1) safe and effective systems; (2) algorithmic discrimination protections; (3) data privacy; (4) notice and explanation; and (5) human alternatives, consideration, and fallback. The Administration has already begun acting on these priorities, including through an April 2023 request for comment from the National Telecommunications and Information Administration (NTIA) on policies designed to assure stakeholders that AI systems are “legal, effective, ethical, safe, and otherwise trustworthy.” Against this backdrop, state privacy laws are also increasingly regulating AI: comprehensive privacy laws in Virginia, Colorado, and Connecticut (and, soon, regulations in California) require opt-out rights and transparency regarding the use of AI, and pending bills in other states would further expand this patchwork of privacy laws, AI rights included.

The Joint Statement should be read in that context, as a focused effort by agencies looking at the same (or very similar) problems. The opening paragraphs explain that “responsible innovation is not incompatible” with federal laws protecting civil rights, fair competition, consumer protection, and equal opportunity. Referring to “automated systems,” the Joint Statement defines the term “broadly to mean software and algorithmic processes, including AI, that are used to automate workflows and help people complete tasks or make decisions.” The agencies recognize that automated systems “offer the promise of advancement” but also risk “perpetuating unlawful bias, automating unlawful discrimination, and producing other harmful outcomes.”

After that opening explanation, the Joint Statement provides an overview of each agency’s relevant statutory authority and AI-specific activities:

  • The FTC—charged with preventing unfair or deceptive acts or practices (UDAP)—highlighted its report on online innovation, including AI, as well as recent alerts warning companies against making deceptive claims about their AI-enabled products and against developing AI systems that introduce bias.
  • The CFPB—charged with enforcing many consumer financial laws—referred to its May 2022 circular reminding companies that the Equal Credit Opportunity Act requires them to identify the specific reasons for adverse credit decisions, even those generated by AI systems. The Joint Statement repeats the circular’s warning that “the fact that the technology used to make credit decisions is too complex, opaque, or new is not a defense for violating” anti-discrimination laws.
  • The DOJ—charged with enforcing constitutional and statutory civil rights protections—referenced a statement of interest it filed in a federal court suit explaining its view that the Fair Housing Act applies to algorithm-based tenant screening.
  • The EEOC—charged with enforcing employment discrimination laws—referenced its existing enforcement activities addressing discrimination related to AI and automated systems, as well as its technical assistance document explaining how the Americans with Disabilities Act applies to employment decisions made using AI systems.

The Joint Statement wraps up with an overview of three areas in which AI systems could give rise to liability:

  • Datasets could import historical biases or, where the underlying data correlate with protected classes, lead to discriminatory outcomes (see the illustrative sketch after this list).
  • Opaque models can make it difficult to explain model outcomes and to otherwise assess whether an automated system is operating in an unfair or discriminatory manner.
  • Unanticipated uses may result in consumer harm if a system is designed with “flawed assumptions about its users, relevant conduct, or the underlying practices or procedures it may replace.”
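To make the first of these concerns concrete, the sketch below shows one way a diligence team might screen a training dataset for proxy features. This is a minimal, hypothetical illustration in Python using pandas; the function name, column names, data, and threshold are invented for this example rather than drawn from the Joint Statement.

```python
# Hypothetical proxy screen: flag numeric features that correlate strongly
# with a protected attribute. All names and data here are illustrative.
import pandas as pd

def proxy_screen(df: pd.DataFrame, protected_col: str,
                 threshold: float = 0.3) -> pd.Series:
    """Return features whose absolute correlation with `protected_col`
    exceeds `threshold` -- candidates for closer anti-bias review."""
    corr = df.corr(numeric_only=True)[protected_col].drop(protected_col).abs()
    return corr[corr > threshold].sort_values(ascending=False)

# Made-up training data: `protected` is a binary protected-class indicator.
data = pd.DataFrame({
    "protected":        [0, 0, 0, 1, 1, 1],
    "zip_income_index": [0.9, 0.8, 0.85, 0.3, 0.35, 0.25],  # strong proxy
    "years_experience": [3, 7, 5, 4, 6, 5],                  # uncorrelated
})
print(proxy_screen(data, "protected"))
# Flags zip_income_index: a facially neutral feature can still encode
# membership in a protected class.
```

A correlation screen like this is only a first pass—it catches simple linear proxies, not every way a dataset can encode protected status—which is one reason the agencies also emphasize outcome testing and documentation.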

What does this mean for my business?

With this action, the Blueprint for an AI Bill of Rights, and other agency initiatives, the Biden Administration is adopting a whole-of-government approach to addressing what it views as potentially problematic uses of AI. And, of course, companies operating across borders will need to consider the newly enacted or proposed AI legislation in major markets across the world. So if you use AI systems in your decision-making, you should take note and consider assessing whether your valuable AI tools may be exposing you to unnecessary legal risk, including risks of discrimination and bias.

If you’re just now integrating AI tools into business processes, this federal guidance can help you build compliance into those systems proactively while the laws and regulations are still taking shape, reducing business risk by design (i.e., privacy by design). In practice, this often means doubling down on AI governance, including a governance framework that scrutinizes data inputs, model outputs (one simple output check is sketched below), and documentation. With an appropriate system in place, you can make sure that the people running your company remain in control of your AI tools and that those tools produce legally defensible outcomes you can explain to stakeholders, customers, employees, applicants, and (if necessary) regulators, enforcement attorneys, or the courts.
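For illustration only, here is one widely used output check: selection rates by group and the “four-fifths” disparate impact ratio familiar from the EEOC’s Uniform Guidelines on Employee Selection Procedures. This is a hypothetical sketch in Python using pandas; the DataFrame, column names, and outcomes are invented, and a ratio below 0.8 is a screening flag for further review, not a legal conclusion.

```python
# Hypothetical four-fifths (80%) rule check on model outcomes.
# All column names and data are invented for this sketch.
import pandas as pd

def disparate_impact_ratios(df: pd.DataFrame, group_col: str,
                            outcome_col: str) -> pd.Series:
    """Return each group's favorable-outcome rate divided by the
    highest group's rate; values below 0.8 warrant closer review."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates / rates.max()

# Made-up decisions: 1 = favorable outcome (e.g., offer extended).
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "selected": [1,   1,   1,   0,   1,   0,   0,   0],
})
print(disparate_impact_ratios(decisions, "group", "selected"))
# group A: 1.00, group B: 0.33 -- below 0.8, so flag for review.
```

Checks like this are the kind of artifact a governance framework can generate and retain, giving you documentation showing that outcomes were tested rather than assumed.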
