October 21, 2022

The American Data Privacy and Protection Act: Is Federal Regulation of AI Finally on the Horizon?

Summary of Key Points

  • An omnibus federal privacy bill with significant bipartisan support is currently under congressional review and, if enacted, could dramatically increase oversight of how companies use artificial intelligence (“AI”) in their businesses.
  • This article discusses the bill, which, even if not enacted, offers valuable insight into potential future regulation of AI.

On July 20, 2022, the House Energy and Commerce Committee approved the proposed American Data Privacy and Protection Act (ADPPA) by a 53-2 margin.1 The bill would create national standards and safeguards for personal information collected by companies, including protections intended to address potentially discriminatory impacts of algorithms.

Although Congress is unlikely to enact the bill between now and the end of the year, the ADPPA represents progress toward a comprehensive data privacy law in the United States and is part of a growing trend calling for federal regulation of AI.2 While several other federal bills addressing algorithmic decision-making have been introduced in recent years, the ADPPA is the first with significant bipartisan support and momentum, and the first to bundle provisions targeting algorithmic accountability and bias with provisions addressing data privacy and security issues.

Scope and Applicability

If enacted, the ADPPA would apply broadly to organizations and businesses operating in the United States. Key definitions in the proposed legislation include those noted below.

Covered entity is defined as an entity that “collects, processes, or transfers covered data and is subject to the Federal Trade Commission Act,” in addition to nonprofit organizations and common carriers. Though the definition is undeniably broad, the ADPPA identifies several different types of entities with additional obligations or exemptions. For certain obligations, covered entities are divided by “impact” (i.e., annual global revenue and number of data subjects affected by the entity’s operations) and “relationship with the data subject” (e.g., direct, third-party, or service provider relationships). By way of example, a “large data holder” is defined as an entity with annual gross revenues of at least $250 million that has collected covered data on more than 5 million individuals or devices, or sensitive covered data on more than 100,000 individuals or devices.
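To make these thresholds concrete, the sketch below expresses the “large data holder” test in code. It is illustrative only, assuming the figures above; the function name and inputs are hypothetical, and the bill’s actual definition contains qualifications this sketch does not capture.

```python
def is_large_data_holder(annual_gross_revenue: float,
                         covered_individuals_or_devices: int,
                         sensitive_individuals_or_devices: int) -> bool:
    """Illustrative sketch of the 'large data holder' thresholds described above."""
    REVENUE_THRESHOLD = 250_000_000       # annual gross revenues of at least $250 million
    COVERED_DATA_THRESHOLD = 5_000_000    # covered data on more than 5 million individuals or devices
    SENSITIVE_DATA_THRESHOLD = 100_000    # sensitive covered data on more than 100,000 individuals or devices

    meets_revenue = annual_gross_revenue >= REVENUE_THRESHOLD
    meets_volume = (covered_individuals_or_devices > COVERED_DATA_THRESHOLD
                    or sensitive_individuals_or_devices > SENSITIVE_DATA_THRESHOLD)
    return meets_revenue and meets_volume
```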

Covered data is defined as “information that identifies or is linked or reasonably linkable to one or more individuals, including derived data and unique identifiers.” Importantly, both employee data and publicly available data are excluded from this definition. Certain types of covered data are defined as sensitive covered data, which would include government identifiers (such as driver’s license or Social Security numbers) as well as “traditionally” sensitive information related to health, geolocation, financials, log-in credentials, race, and sexual history or identity. Sensitive data may also include other categories, such as television viewing data, intimate images, and “information identifying an individual’s online activities over time or across third-party websites or online services.”

A service provider is defined as “a person or entity that collects, processes, or transfers covered data on behalf of, and at the direction of, a covered entity for the purpose of allowing the service provider to perform a service or function on behalf of, and at the direction of, such covered entity.” Notably, the ADPPA would place direct obligations on service providers, including obligations not found in state privacy laws, such as a prohibition on transferring covered data to anyone other than another service provider without affirmative express consent.

A third-party collecting entity is defined as “a covered entity whose principal source of revenue is derived from processing or transferring the covered data that the covered entity did not collect directly from the individuals linked or linkable to the covered data.” Third-party collecting entities would be required to provide consumers with notice of their activity, register with the Federal Trade Commission (“FTC”) if they process data pertaining to more than 5,000 individuals or devices that are reasonably linkable to an individual, and provide consumers the opportunity to require that such entities delete their covered data.

Oversight of AI and Algorithmic Decision-Making

With respect to AI, the ADPPA includes a provision (Section 207: Civil Rights and Algorithms) under which covered entities or service providers “may not collect, process, or transfer covered data in a manner that discriminates in or otherwise makes unavailable the equal enjoyment of goods or services on the basis of race, color, religion, national origin, sex, or disability.” The two limited exceptions are a covered entity’s self-testing to prevent or mitigate unlawful discrimination and a covered entity’s efforts to diversify an applicant, participant, or customer pool.

Unlike most existing state privacy laws, Section 207 of the ADPPA would go a step further by requiring companies to evaluate certain artificial intelligence tools and submit those evaluations to the FTC.

Which entities are subject to Section 207? Covered entities and service providers that develop algorithms to collect, process, or transfer covered data or publicly available information would be required to conduct algorithm design evaluations prior to deploying the algorithms in interstate commerce. In addition, any large data holder that uses an algorithm “that may cause potential harm to an individual,” and uses such algorithm to collect, process, or transfer covered data, would also be required to conduct an algorithm impact assessment on an annual basis.

What is an “algorithm”? The bill defines a “covered algorithm” as “a computational process that uses machine learning, natural language processing, artificial intelligence techniques, or other computational processing techniques of similar or greater complexity that makes a decision or facilitates human decision-making with respect to covered data, including to determine the provision of products or services or to rank, order, promote, recommend, amplify, or similarly determine the delivery or display of information to an individual.” This definition is extremely broad and would cover almost any decision that involves automation in the decision-making process, even if the ultimate decision is made by a person.

What is an “algorithm design evaluation”? According to the proposed bill, covered entities and service providers must evaluate the design, structure, and data inputs of the algorithm to reduce the risk of potential discriminatory impacts. The draft legislation emphasizes that these evaluations must occur during the design phase and must cover any training data used to develop the algorithm. The ADPPA would also require the use of an external, independent researcher or auditor to conduct the evaluation, to the extent possible. The covered entity or service provider would be required to submit the evaluation to the FTC no later than 30 days after its completion and to make it available to Congress upon request.

What is an “algorithm impact assessment”? Large data holders that use algorithms that may cause potential harm to an individual, and that use such algorithms to collect, process, or transfer covered data, must also conduct an algorithm impact assessment. The draft bill describes these assessments in detail and requires that they include the following (a sketch of one way to capture these elements internally appears after the list):

  • A detailed description of the design process and methodologies of the algorithm;
  • A statement of the algorithm’s purpose, its proposed uses, and its foreseeable capabilities outside of the articulated proposed use;
  • A detailed description of the data inputs used by the algorithm, including the specific categories of data that will be processed and any data used to train the underlying model;
  • A description of the outputs produced by the algorithm;
  • An assessment of the necessity and proportionality of the algorithm in relation to its purpose, including the reasons an algorithm is superior to a non-automated decision-making process; and
  • A detailed description of steps to mitigate potential harms.
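As a purely illustrative sketch, the six statutory elements above could be captured in an internal assessment record along the following lines; the class and field names are hypothetical, not drawn from the bill.

```python
from dataclasses import dataclass, field

@dataclass
class AlgorithmImpactAssessment:
    """Hypothetical internal record mirroring the six elements listed above."""
    design_process: str                  # design process and methodologies of the algorithm
    purpose_and_uses: str                # purpose, proposed uses, and foreseeable capabilities beyond them
    data_inputs: str                     # categories of data processed, including model training data
    outputs: str                         # outputs produced by the algorithm
    necessity_and_proportionality: str   # why the algorithm is superior to a non-automated process
    harm_mitigation_steps: list[str] = field(default_factory=list)  # steps to mitigate potential harms
```

A completed record of this kind could then support the 30-day FTC submission and annual refresh described below.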

Large data holders would be required to submit the impact assessment to the FTC no later than 30 days after completion of the assessment and continue to produce assessments on an annual basis. As with algorithm design evaluations, the proposed legislation would require the use of an external, independent researcher or auditor to conduct the algorithm impact assessment, to the extent possible.

The level of prescriptive detail may require many companies, and especially large data holders, to dedicate significant resources to assessing their algorithmic tools during the development phase and additional resources to monitoring those same tools during and after development.

Which “potential harms” require an algorithm impact assessment? The following potential harms are expressly highlighted in the text of the bill, suggesting that these are areas of focus for lawmakers:

  1. Potential harms related to individuals under the age of 17;
  2. Potential harms related to advertising for, access to, or restrictions on the use of housing, education, employment, healthcare, insurance, or credit opportunities;
  3. Potential harms related to determining access to, or restrictions on the use of, any place of public accommodation, particularly as such harms relate to protected characteristics, including race, color, religion, national origin, sex, or disability; and
  4. Potential harms related to disparate impact on the basis of individuals’ race, color, religion, national origin, sex, or disability status.

The language of the proposed bill suggests that this list of potential harms is not exhaustive. It is also worth noting that the bill is under consideration at a time of significant regulatory attention to ad targeting and digital marketing, including from the Consumer Financial Protection Bureau, which recently issued an interpretive rule on digital marketing and expressed concern over discriminatory conduct online and “digital redlining.”3

What does it mean to “discriminate” under Section 207? One of the key questions raised by the proposed legislation, and one that would be critical to assessing compliance, is what exactly it means to “discriminate” under Section 207 of the ADPPA. While Section 207’s reporting requirements involve descriptions of any “disparate impact” resulting from the deployment of an algorithm in a covered entity’s business practices, it is unclear what legal standards would be used to assess discrimination or disparate impact under the proposed legislation, or what type of business justification might suffice to satisfy the proposed bill’s requirements. Depending on the algorithm, it may be very difficult, if not impossible, to completely eliminate all disparate impact against protected classes, even when using objective and facially non-discriminatory criteria. In addition, the proposed legislation refers to “protected characteristics,” but this term is not defined, nor does the legislation reference any federal or state anti-discrimination laws that explicitly enumerate the so-called “prohibited bases” such laws are designed to protect. Moreover, the proposed bill does not address how companies are expected to perform testing in the absence of demographic data such as race or national origin, or whether proxying methodologies, such as Bayesian Improved Surname Geocoding (“BISG”), would be required.
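For context, BISG estimates the probability of an individual’s race or ethnicity from surname and geography using Bayes’ rule, typically drawing on Census data. The sketch below shows only the core calculation, with made-up placeholder probabilities and generic group labels; it is not the published methodology.

```python
# Core BISG idea: P(race | surname, geography) is proportional to
# P(surname | race) * P(race | geography), assuming surname and geography
# are conditionally independent given race. Real implementations derive
# these tables from Census surname and block-group data; the values below
# are placeholders for illustration only.

GROUPS = ["group_a", "group_b", "group_c", "group_d"]  # hypothetical categories

p_surname_given_group = {"group_a": 0.02, "group_b": 0.01, "group_c": 0.20, "group_d": 0.001}
p_group_given_geography = {"group_a": 0.60, "group_b": 0.10, "group_c": 0.25, "group_d": 0.05}

def bisg_posterior(p_sur: dict, p_geo: dict) -> dict:
    """Return normalized P(group | surname, geography) via Bayes' rule."""
    unnormalized = {g: p_sur[g] * p_geo[g] for g in GROUPS}
    total = sum(unnormalized.values())
    return {g: round(v / total, 3) for g, v in unnormalized.items()}

print(bisg_posterior(p_surname_given_group, p_group_given_geography))
```

Probabilities of this kind could then feed a disparate impact analysis where self-reported demographic data is unavailable, which is precisely the scenario the bill leaves unaddressed.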

Enforcement

The ADPPA would create a Bureau of Privacy at the FTC to enforce its provisions, and any ADPPA violation would be treated as a violation of a rule defining an unfair or deceptive act or practice (“UDAP”) under section 18(a)(1)(B) of the Federal Trade Commission Act (15 U.S.C. 57a(a)(1)(B)).

With respect to Section 207, the ADPPA would authorize the FTC to promulgate regulations establishing processes by which large data holders can submit impact assessments and to exclude from assessment “any algorithm that presents low or minimal risk for potential harms to individuals.” The ADPPA would also require the FTC to publish guidance on compliance with Section 207 within two years of the bill’s enactment and, within three years, a study of best practices for the assessment and evaluation of algorithms and of methods to reduce the risk of harm. These publications may help guide companies as they navigate compliance and dedicate resources to the evaluation of algorithmic tools.

Although the ADPPA as drafted includes a private right of action, about which a number of business groups have raised concerns, that right of action, importantly, would not apply to Section 207’s provisions related to potential discrimination. Instead, the FTC and state attorneys general would be empowered to enforce Section 207.

What’s Next?

Despite its bipartisan support, the bill has faced significant resistance from California lawmakers, who argue that it would preempt the California Privacy Rights Act (“CPRA”), which in their view offers stronger protections to California residents (though a number of experts, including a former Chairman of the FTC, have questioned whether the CPRA actually provides stronger protections). Several state attorneys general have also sent a joint letter to Congress expressing the urgent need to amend the bill to explicitly allow states to pass potentially more expansive privacy, data, and artificial intelligence-related requirements in the future as technology and online practices evolve. Conversely, a number of business groups have expressed concern that the bill does not effectively preempt state laws, leaving in place, at least to some extent, a patchwork of privacy laws across the United States.

Even if its prospects for enactment are uncertain, the ADPPA provides significant insight into the type of oversight of AI tools that lawmakers and regulators may seek to exercise in the near future. The issue is likely to receive continued focus from the federal government, as demonstrated by the White House Office of Science & Technology Policy’s recent unveiling of a Blueprint for an AI Bill of Rights.

Companies may wish to consider developing internal impact assessment forms for design teams to fill out during the development phase of algorithmic products, paying particular attention to data integrity and data inputs; human oversight, monitoring, and control; and, potentially, disparate impact analyses. These impact assessment forms and related processes could be embedded into existing governance protocols, and training could be arranged for relevant stakeholders. Companies may also consider whether their organizations would benefit from the addition of an AI committee or whether existing risk committees or other bodies can expand their remit to assess impacts of algorithmic applications. The teams conducting the impact assessment would benefit from being cross-functional and diverse—design and technology experts, risk and/or compliance strategists, marketing professionals, ethicists, and lawyers can all be important advisors during this process.
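As one illustration of embedding such a form into existing governance protocols, a simple completeness gate could block deployment sign-off until each focus area has been addressed. Everything below is hypothetical and schematic, not a prescribed compliance control.

```python
# Hypothetical pre-deployment gate: sign-off is blocked until every focus
# area suggested above has substantive content in the assessment form.
REQUIRED_SECTIONS = [
    "data_integrity_and_inputs",
    "human_oversight_monitoring_control",
    "disparate_impact_analysis",
]

def ready_for_signoff(assessment_form: dict) -> bool:
    """Return True only if every required section is filled in."""
    missing = [s for s in REQUIRED_SECTIONS if not assessment_form.get(s, "").strip()]
    if missing:
        print(f"Assessment incomplete; missing sections: {missing}")
    return not missing
```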

1 See American Data Privacy and Protection Act, H.R. 8152, 117th Cong., https://www.congress.gov/bill/117th-congress/house-bill/8152/text#toc-H4B489C75371741CBAA5F38622BF082DE; American Data Privacy and Protection Act Draft Legislation Section by Section Summary (2022), S. Comm. on Commerce, Science, and Transportation, https://www.commerce.senate.gov/services/files/9BA7EF5C-7554-4DF2-AD05-AD940E2B3E50.

2 See Blueprint For An AI Bill Of Rights: Making Automated Systems Work For The American People, White House Office of Science & Technology Policy, https://www.whitehouse.gov/ostp/.

3 See US CFPB Takes Aim at Digital Marketing Providers with New Interpretative Rule (https://www.mayerbrown.com/en/perspectives-events/publications/2022/08/us-cfpb-takes-aim-at-digital-marketing-providers-with-new-interpretative-rule).
