October 15, 2025

Artificial Intelligence Provisions in Technology Contracting: Keeping Up with the Evolving Regulatory Landscape

Regulation has struggled to keep up with the blistering pace of AI development and deployment. As legislators and regulators grapple with the risks involved in the use of AI and how best to control those risks (if at all), businesses should proactively address these risks and regulations in their AI-related contracts, whether as a customer procuring AI tools or as a service provider seeking to use AI to better serve its customers. Both providers and customers should review and update their contracts as necessary to remain in compliance as the law and the technology rapidly change.

United States: Trends Across States

Much like privacy law, there is no comprehensive AI regulation at the federal level in the United States. Instead, a patchwork of state laws is developing. A number of states have already passed significant AI legislation, including: (1) Colorado, which has a comprehensive AI law intended to safeguard Colorado residents from algorithmic discrimination1 and an AI insurance regulation that establishes a risk management framework;2 (2) California, Utah, and New Jersey, which have AI laws that require businesses to disclose to users (i) that they are interacting with AI, (ii) whether content is AI-generated, and (iii) information about the training data used to develop AI;3 (3) New York and Illinois, which have human resources AI laws that protect employees and job applicants from AI-related discrimination and provide transparency when AI is used for employment decisions;4 and (4) Texas, which has an AI law focused on preventing developers and deployers from engaging in prohibited practices (e.g., manipulating human behavior, infringing the US Constitution, engaging in intentional discrimination, and developing certain sexually explicit content and child pornography).5 This emerging statutory framework adds to laws of general applicability that may bear on the development or use of AI (e.g., IP, data privacy, employment, product liability, and consumer protection statutes). When these AI laws are considered together, their requirements can be harmonized and translated into contractual provisions intended to protect both developers and deployers of AI systems.6

Notably, the most comprehensive US AI law, the Colorado Anti-Discrimination in AI Law (Colorado ADAI Law), imposes different obligations on developers (entities that make an AI system) and deployers (entities that use an AI system). In particular, developers and deployers using AI in high-risk use cases are subject to higher standards. High-risk areas include consequential decisions in education, employment, financial or lending services, essential government services, healthcare, housing, insurance, and legal services.7 The other state AI laws mentioned above likewise apply differently depending on whether the business is a developer or a deployer of an AI system.

When these requirements are put together in contracting for AI systems, a broad mutual compliance-with-laws obligation may suffice for low-risk use cases. For high-risk AI use cases, however, it is important to include clear contractual requirements regarding developer and deployer obligations across the AI value chain. In particular, a deployer may want the developer to warrant that it developed the AI system in a responsible manner, including through appropriate data governance, risk mitigation measures (e.g., the NIST AI RMF or the ISO/IEC 42001 standard), documentation and instructions for use, transparency notices (e.g., latent and manifest disclosures in AI-generated content), cybersecurity, and testing for algorithmic discrimination, accuracy, and robustness. Likewise, the developer would likely expect corresponding commitments from the deployer to use the AI system responsibly, implement an AI risk management framework (e.g., the NIST AI RMF or the ISO/IEC 42001 standard), communicate AI pre-use notices to its end users, handle AI-related rights requests, prepare required AI impact assessments, and follow the developer's instructions for use.

Most of the state AI laws are particularly concerned with bias and discrimination resulting from algorithmic decision-making, and they require developers and deployers to take steps to prevent discrimination in high-risk AI systems. To this end, a deployer might seek a commitment from a developer to regularly audit the AI system for potential bias and address any issues found, including providing information to the deployer about how the AI makes decisions. Conversely, the developer may require the deployer to apply sufficient human oversight and procedures to identify discriminatory impacts, and to notify the developer if the deployer identifies any bias in the AI system so that the developer can take corrective action.
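
By way of illustration only, a bias audit of the kind these commitments contemplate often starts by comparing selection rates across demographic groups, for example against the four-fifths rule of thumb used in US employment discrimination analysis. The following is a minimal sketch, assuming hypothetical decision-log fields; it is not a substitute for the audit methodology any particular law requires.

```python
from collections import defaultdict

def disparate_impact_ratios(decisions, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold` times the
    highest group's rate (the four-fifths rule of thumb). `decisions` is
    assumed to be records like {"group": ..., "selected": bool}."""
    counts = defaultdict(lambda: {"selected": 0, "total": 0})
    for d in decisions:
        counts[d["group"]]["total"] += 1
        counts[d["group"]]["selected"] += int(d["selected"])

    # Selection rate per group, then each group's ratio to the best rate.
    rates = {g: c["selected"] / c["total"] for g, c in counts.items()}
    best = max(rates.values())
    return {g: {"ratio": r / best, "flagged": r / best < threshold}
            for g, r in rates.items()}

sample = [
    {"group": "A", "selected": True}, {"group": "A", "selected": True},
    {"group": "A", "selected": False},
    {"group": "B", "selected": True}, {"group": "B", "selected": False},
    {"group": "B", "selected": False},
]
print(disparate_impact_ratios(sample))
# Group A: ratio 1.0, not flagged; group B: ratio 0.5, flagged for review
```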

Another key principle emerging in US AI law is transparency. Developers may have obligations to provide deployers with information about how their AI systems work and were developed, while deployers may have obligations to notify their users and customers that AI is in use. In particular, developers providing generative AI services to the public in California will need to provide information about the underlying datasets used to train the models, as well as clear disclosures when content is AI-generated.8 Likewise, deployers (and developers) who use AI to interact with consumers may need to clearly disclose that the consumer is interacting with AI, which can be addressed through notices near the AI system's prompt box and through responses programmed into the AI system (e.g., so that, when asked whether it is human, the system states that it is a chatbot). Parties may wish to delineate in the contract who will be responsible for these various transparency obligations.
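
For instance, the programmed chatbot disclosure described above is often implemented as a guardrail that intercepts "are you human?"-style queries before they reach the model. The sketch below is illustrative only; the trigger patterns and disclosure wording are our assumptions, not statutory language.

```python
import re

# Naive illustrative patterns; production systems would use more robust
# intent detection than a regular expression.
HUMAN_QUERY = re.compile(
    r"\b(are|r)\s+(you|u)\s+(a\s+)?(human|real person|bot)\b", re.I
)

DISCLOSURE = "I am an AI chatbot, not a human. How can I help?"  # hypothetical wording

def respond(user_message: str, model_reply_fn) -> str:
    # Answer "are you a human?" queries with the disclosure before the
    # request ever reaches the underlying model.
    if HUMAN_QUERY.search(user_message):
        return DISCLOSURE
    return model_reply_fn(user_message)

print(respond("Are you a human?", lambda m: "(model reply)"))  # -> disclosure
```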

One final point the parties will want to keep in mind is the extent to which these laws do (and should) apply to them. If the developer does not intend for the AI system to be used in a high-risk use case (thus avoiding the most stringent requirements), the contract should include commitments from the deployer not to use the AI system for such use cases. Likewise, the contract should include mutual covenants of assistance (and possibly indemnities) if one party's actions bring the other party within the scope of more stringent (or different) regulations, for example where a party shifts from deployer to developer or vice versa.

European Union: Comprehensive Framework Under the EU Artificial Intelligence Act

Scoping the applicable regulatory obligations on each party

Under the EU Artificial Intelligence Act (EU AI Act), the regulatory obligations placed upon a company will depend on its role with respect to the AI system, the nature of the AI system, and the risk level associated with it.9 Companies should consider the following questions to determine what regulatory obligations they and their counterparties will bear in connection with an agreement concerning the deployment or development of AI systems, so as to ensure that these obligations are appropriately contracted for when negotiating terms:

What is the role of each party under the agreement?

The provider is the entity that developed the AI model or system, or had it developed, and places it on the market under its own name or trademark. The deployer is the entity under whose authority the AI system is used (e.g., by its employees).

In certain scenarios, a party may act as both provider and deployer. For example, a supplier of an AI system (a provider under the EU AI Act) may offer services that incorporate the use of third-party AI tools (and would then be a deployer of those tools). A deployer may also become a new provider of an AI system (for example, as a result of its modifications to an AI system supplied by a third party).

Under the EU AI Act, providers are subject to a greater level of obligations than deployers. Parties may also be subject to separate obligations in their capacity as an authorized representative, importer, distributor or product manufacturer of AI systems.

What types of AI systems does the agreement relate to, and what risk levels are associated with them?

The level of obligations imposed on a party will depend on the type of AI system and level of risk associated with it:

  • Prohibited AI practices: These are expressly prohibited under the EU AI Act (e.g., social scoring or inferring emotions in the workplace);
  • General-purpose AI models: These are subject to additional obligations under the EU AI Act and additional regulatory oversight;
  • High-risk AI systems: High-risk AI systems pose a significant risk of harm to the health, safety, or fundamental rights of natural persons; as a result, they are subject to the strictest obligations under the EU AI Act;
  • Limited risk AI systems: Subject to transparency requirements only; and
  • Minimal or no risk AI systems: No additional obligations.
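
To make this scoping exercise concrete, the sketch below shows one way a contracting team might encode these tiers in an internal checklist. It is a minimal illustration under our own assumptions; the labels and the mapping to contractual posture are not defined terms of the EU AI Act.

```python
from enum import Enum

class RiskTier(Enum):
    # Illustrative labels for the EU AI Act's risk categories.
    PROHIBITED = "prohibited AI practice"
    GPAI = "general-purpose AI model"
    HIGH = "high-risk AI system"
    LIMITED = "limited risk AI system"
    MINIMAL = "minimal or no risk AI system"

# Hypothetical mapping from tier to the contractual posture discussed in
# this section; the actual analysis must be made case by case.
CONTRACT_POSTURE = {
    RiskTier.PROHIBITED: "do not contract; the practice is banned",
    RiskTier.GPAI: "address the model provider's additional obligations",
    RiskTier.HIGH: "detailed provider/deployer obligations and indemnities",
    RiskTier.LIMITED: "allocate transparency obligations between the parties",
    RiskTier.MINIMAL: "a general compliance-with-laws clause may suffice",
}

for tier in RiskTier:
    print(f"{tier.value}: {CONTRACT_POSTURE[tier]}")
```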

Depending on the role of the organization and the risk level of the AI model or system, different obligations apply. For this reason, it is essential to determine, on a case-by-case basis, the obligations of the parties, and reflect them in the agreement. As part of any scoping exercise, companies should also consider the following issues for drafting the agreement:

  • High-risk practices: Providers and deployers should assess whether an AI system will be used for high-risk uses. If so, each party should seek to include contractual obligations to ensure that the other party meets their regulatory obligations as a provider or deployer of high-risk AI systems (as explained below). If not, the provider should seek to specify in the agreement that the AI system will not be used by the deployer for high-risk uses.
  • Changes of party roles: When a deployer of an AI system becomes a new provider as a result of its modifications to the AI system, the provider of the original AI system will be subject to additional cooperation obligations under the EU AI Act to support the new provider's compliance with its obligations as a provider. The original provider may seek to avoid these obligations by including in the contract an express prohibition on any modifications to, or modified uses of, the original AI system by the deployer (or it may otherwise seek to clarify the level of cooperation to be provided to the deployer and to limit its contractual liability, for example by way of an indemnity for any losses arising from the deployer's modifications to or uses of the AI system). Equally, if a deployer intends to make modifications to a provider's AI system, the deployer should seek contractual provisions specifying the provider's cooperation obligations.

AI Transactions in China

China has been at the forefront of AI regulation, as it has introduced various laws and guidelines on the use of AI, deep synthesis, and algorithm management. We set out below some of the key requirements that AI service providers and users should be aware of when preparing and entering into a technology contract that targets the China market or otherwise has a China nexus.

Security Assessment and Filing with the CAC

AI service providers are required to conduct security assessments and file copies of their algorithms with the Cyberspace Administration of China (CAC) if their services are deemed capable of influencing public opinion or causing social mobilization. Any agreement with a People's Republic of China (PRC) service provider that may involve services being provided in China should contain, at a minimum, warranties from the PRC service provider that its services comply with the relevant regulations and, if applicable, have passed the required security assessment and filing.

The customer should also seek an indemnity from the service provider for any losses or liabilities arising from non-compliance with any of the PRC AI laws and regulations.

Training Data

Parties may add provisions that regulate the use of training data and foundation models by AI service providers. Under current regulations, AI service providers must ensure the legitimacy, quality, accuracy, and diversity of the data used for training their AI models. In addition, the training data and foundation models must be obtained from lawful sources and comply with intellectual property laws and personal information protection laws. AI service providers are also prohibited from collecting excessive or unnecessary personal information or unlawfully retaining personal information as input data.
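
As a purely illustrative example of the input-data hygiene these rules contemplate, a provider might screen incoming records for obvious personal identifiers before retaining them. The patterns below are simplistic assumptions; production systems would use far more comprehensive personal information detection.

```python
import re

# Naive illustrative patterns only; real PII detection is far broader.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\b1\d{2}[-\s]?\d{4}[-\s]?\d{4}\b"),  # PRC mobile style
}

def redact_pii(record: str) -> str:
    # Replace matched identifiers with placeholders rather than retaining them.
    for label, pattern in PII_PATTERNS.items():
        record = pattern.sub(f"[{label} removed]", record)
    return record

print(redact_pii("Contact Li at li@example.com or 138-1234-5678."))
# -> "Contact Li at [email removed] or [phone removed]."
```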

Content Management

Another important obligation for AI service providers relates to content management. If the service provider identifies any illegal or harmful content, such as content that threatens national security, public order, or social morality, it must promptly take action to stop producing and transmitting such content and to remove it. AI service providers should also rectify any deficiency in the AI model and report such unlawful content to the relevant authorities. PRC AI service providers will likely insist on provisions in the agreement to ensure the lawful and ethical use of the AI services by users, including commitments not to generate or transmit any illegal or harmful content using the AI services. In turn, AI service providers should be required to put in place a set of content management measures and to set up a mechanism for receiving and handling users' complaints.
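
A minimal sketch of how these takedown obligations might be wired into a provider's serving pipeline appears below. The classifier, removal, reporting, and logging hooks are hypothetical placeholders for a provider's real systems, not features of any particular product.

```python
def handle_generated_content(content_id, text, is_harmful, remove_content,
                             file_regulator_report, log_for_model_fix):
    """Illustrative content-management hook for generated output."""
    if not is_harmful(text):
        return "served"

    remove_content(content_id)                    # stop transmission; remove content
    file_regulator_report(content_id, "harmful")  # report to the relevant authorities
    log_for_model_fix(content_id, text)           # queue sample to rectify the model
    return "blocked"

# Example wiring with stub hooks.
status = handle_generated_content(
    "c-123", "example output",
    is_harmful=lambda t: False,
    remove_content=lambda cid: None,
    file_regulator_report=lambda cid, reason: None,
    log_for_model_fix=lambda cid, t: None,
)
print(status)  # "served"
```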

Labelling of AI-Generated Content

China requires AI-generated content (AIGC) to be labelled in order to enhance transparency and prevent fraud and misinformation arising from the misuse of AI. AIGC should carry explicit labels that are easily perceived by users, as well as implicit labels embedded in the content's metadata. Service providers are required to specify in the user agreement how AIGC should be labelled and to remind users to carefully read and understand the relevant labelling requirements. When a user asks a service provider for unlabelled AIGC, the service provider should set out the user's labelling obligations in the user agreement and exercise caution before providing the unlabelled AIGC. Service providers that provide unlabelled AIGC should also keep records of the relevant users' information and logs for a period of not less than six months.
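
By way of illustration, an implicit label can be embedded in a file's metadata at generation time. The sketch below writes a hypothetical AIGC marker into a PNG's text metadata using Pillow; the field names and values are our assumptions, not the format mandated by the labelling rules.

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def save_with_aigc_label(img: Image.Image, path: str, provider: str) -> None:
    # Implicit label carried in PNG text metadata; keys are illustrative.
    meta = PngInfo()
    meta.add_text("AIGC", "true")
    meta.add_text("AIGC-Provider", provider)
    img.save(path, pnginfo=meta)

img = Image.new("RGB", (64, 64))
save_with_aigc_label(img, "generated.png", provider="ExampleAI")
print(Image.open("generated.png").text)
# {'AIGC': 'true', 'AIGC-Provider': 'ExampleAI'}
```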

1 See Colo. Rev. Stat. Ann. § 6-1-1701.

2 See 3 Colo. Code Regs. § 702-10:10-1-1.

3 See Cal. Bus. & Prof. Code § 17940; Cal. Bus. & Prof. Code § 22757; Cal. Civ. Code § 3110; Utah Code Ann. § 13-2-12; N.J. Stat. Ann. § 56:18-1.

4 See New York City, N.Y., Code § 20-870; New York City, N.Y., Rules, Tit. 6, § 5-300; 775 Ill. Comp. Stat. Ann. 5/2-101; 820 Ill. Comp. Stat. Ann. 42/1.

5 For more details, see Mayer Brown’s Legal Updates “Texas Passes Unique Artificial Intelligence Law Focused on Prohibited Practices.”

6 Likewise, this text only addresses regulatory requirements in contracts; parties may wish to address other aspects of AI (such as IP rights in input and output) elsewhere in the contract. For more information, see “Partnering with Cloud Providers for AI Solutions” in Partnering For Innovation in a Changing World: Legal Perspectives from Mayer Brown.

7 See Colo. Rev. Stat. Ann. § 6-1-1701(3).

8 For more details, see Mayer Brown’s Legal Updates “California Passes New Generative Artificial Intelligence Law Requiring Disclosure of Training Data” and “New California Law Will Require AI Transparency and Disclosure Measures.”

9 For a deeper dive into the applicability of and requirements under the EU Artificial Intelligence Act, see “The Impact of the EU AI Act on AI Reseller Deals” in Partnering For Innovation in a Changing World: Legal Perspectives from Mayer Brown.
