In the 2004 film "I, Robot", Hollywood depicted a world set in 2035 where humanoid robots served humanity and driverless cars navigated the streets of a futuristic Chicago. Fast forward 13 years and the Chancellor has pledged in last week's Budget to secure Britain's position as a world leader in technology and innovation. Part of this proposal included pledging £75 million for artificial intelligence (or "AI") and a commitment to make regulatory changes for on-road driverless car testing.

Although driverless cars are, in many respects, the "poster boy" for AI, it will not be too long before AI has a role to play in almost every aspect of how we live.

From a commercial perspective, this may mean that when parties enter into contracts, rather than those contractual obligations being performed by a human, they will, in certain circumstances, be performed by (for want of a more technical term) a robot. This is likely to have legal ramifications where the courts are asked to consider whether parties that have used AI in the performance of their contract have adequately discharged their contractual obligations.

To take a common legal example, section 13 of the Supply of Goods and Services Act 1982 implies a term into contracts (where the supplier is acting in the course of business) that "the supplier will carry out the service with reasonable care and skill". In an AI context, this raises the issue of what a court would take into account when assessing whether or not that standard had been met by the supplier.

Arguably, the supplier could be said to have discharged its obligations by virtue of the fact that it appointed a reputable programmer who, when programming the robot, acted reasonably in the circumstances, with the human skill and input being critical. However, the party contracting to receive those services will want, and is increasingly requiring, the contract also to stipulate the desired output to be achieved by the robot. This will be particularly important as robots develop "machine learning" and change their behaviour of their own volition, in ways that neither the programmers nor the contracting parties can predict. Whilst this may seem far-fetched, those drafting contracts will need to balance the desire to control performance standards against the need to "future-proof" the contract: as technology moves on, parties receiving AI-based services will expect them to be state of the art, not to be stuck with 2017 technology while their competitors have left it far behind.

Over the last few years we have seen an increase in the number of contracts concerning the use of AI, and we expect that over the next few years, as the use of AI becomes more widespread, contractual provisions governing its use will become increasingly detailed and technical. Consequently, any contractual disputes concerning the use of AI are likely to give rise to highly technical questions about coding and software development more generally, which the courts will need to consider.

Ultimately, until these issues are litigated and a body of case law is developed in this area, it is difficult to say with any certainty how this area of the law will evolve. It seems inevitable, however, that as companies change the way they operate so as to include AI in their processes, commercial disputes will begin to arise in respect of how contracts are performed using AI.

If, therefore, the UK is to become a world leader in technology and innovation (as envisaged by the Chancellor), both Parliament and the English courts will need to ensure that they are well positioned to grapple with these issues in order to continue to provide the legal certainty that has become the hallmark of the English legal system over centuries.