AI: Navigating Legal Challenges | Contractual Risk


AI's prominence, and the role it plays in both everyday society and the commercial world, continues to grow. Whilst the use of AI can deliver significant benefits for businesses and their customers, it can also give rise to additional commercial and legal risks.

Some of those risks are consistent with pre-existing legal issues that businesses commonly face (particularly when using other forms of technology); however, many of the risks arising from the use of AI are novel. Businesses and their in-house legal teams must consider these risks carefully as they adopt new technologies.

This article is the first instalment of our 'AI: Navigating Legal Challenges' series, in which we explore the legal challenges faced by businesses adopting AI and consider what can be done to navigate and protect against those risks.

In this first article, we consider the position from a contractual risk perspective.

Contractual risk

The integration of AI systems into business operations is rapidly transforming how businesses perform their contractual obligations.

This technological advancement is not, however, without its challenges. In particular, some businesses that adopt AI do not have the necessary in-house expertise to understand fully how the technology that they are licensing and using operates. As a result, these businesses may not be in a position to make a comprehensive assessment of the capabilities and limitations of the technology, potentially introducing "unknown unknowns" in the form of complex commercial and legal risks.

Challenges of AI in Contractual Obligations
  1. Opacity and complexity of AI systems: AI software is inherently complex and often operates as a "black box". Put simply, "black box" describes the inability of humans to see how deep learning models (such as LLMs) reach their decisions. This lack of transparency can obscure the root cause of any issues that arise with the AI system, complicating the process of determining whether the AI system has performed as expected under the contract and based on the data it has been provided with or trained on, or whether something more fundamental has gone wrong along the way.
  2. Autonomy and reduced human involvement: Modern AI systems are increasingly autonomous, relying less on human intervention. This shift raises significant legal questions about responsibility and accountability. When an AI system makes a poor or erroneous decision, exhibits bias, or "hallucinates" (i.e. produces incorrect or nonsensical outputs), it can be challenging to ascertain whether the fault lies with the developer or the user. Such decisions can have significant and long-term commercial, legal and reputational ramifications, particularly where (for example) they result in a business behaving in a discriminatory manner or in advice being given that is fundamentally flawed. A recent example arose earlier this year when an airline was held liable in the Canadian courts for a negligent misrepresentation made to a customer by one of its AI-powered chatbots.1 In an earlier case from 2019, the Singapore courts drew a distinction between deterministic computers (which produce exactly the same output when provided with the same input) and artificial intelligence (which could "be said to have a mind of its own") in the context of the law relating to mistake.2
  3. Attribution of responsibility: The autonomous nature of AI systems means that, when failures occur, it is harder to pinpoint whether the issue stems from a technical malfunction or from human error. This ambiguity complicates the process of assigning liability between the parties, especially in the absence of clear legal frameworks. It is particularly important, during the negotiation of the contract for the AI technology, to clarify whether the developer or the user is responsible for adequate staff training on the AI model, and the extent to which performance will be monitored (whether by the developer or the user) for defects and improvements.
Navigating Contractual Risk

To mitigate these risks, businesses must proactively address potential issues through well-drafted contracts, such as by:

  • Implementing bespoke liability frameworks: Contracts should include specific provisions that address the unique risks associated with AI systems. This may involve detailed warranties, indemnities and liability caps that set out the expected performance of the AI, the allocation of risk between the parties when the AI system does not perform as expected, and limitations on liability for losses arising from the use of the AI system.
  • Setting clear testing and training requirements: Businesses should establish rigorous testing protocols to identify and rectify potential defects in AI systems before deployment. Ongoing quality control of output should be considered a basic risk management measure. Additionally, comprehensive training for users can help prevent AI misuse, ensure that errors and hallucinations are spotted at an early stage, and reduce the likelihood of disputes.
  • Reaching mutual agreement on risk allocation: In the absence of formal AI-specific legislation, parties can achieve contractual clarity by reaching mutual agreement on how risks are to be allocated. This can include specifying the standards of performance and the responsibilities of each party in the event of a system failure.

Conclusion

Implementing AI technology into a business is not without risk, and it is important that businesses are alive to, and understand, the risks that could arise through the use of AI, some of which they may not have encountered previously.

These risks can, however, be mitigated significantly by ensuring that contracts are drafted clearly and with the specific application in mind. Those responsible for drafting contracts whose subject matter involves AI technology should ensure that they have a sufficient understanding of the technology and include appropriate safeguards to protect the business from any loss or damage that it may suffer as a result of its use.

In circumstances where an AI-related dispute does arise, it will be important to ensure that the business and the legal team (both in-house and external) develop a comprehensive understanding of (i) the parties' expectations prior to entering into the contract regarding how the AI should have performed; and (ii) how it performed in practice. More generally, parties to AI disputes should ensure that their external legal counsel have a strong understanding of the novel and specific issues that can arise in the context of AI disputes and that technical experts are instructed at an early stage in the dispute process to aid understanding and speed up the process of determining what has gone wrong and where fault is likely to lie.



1 Moffatt v. Air Canada, 2024 BCCRT 149.

2 B2C2 Ltd v Quoine Pte Ltd [2019] SGHC(I) 3.
