Artificial Intelligence applications are appearing everywhere, with promises to transform our lives in many ways. But what happens when things go wrong? In this series of two articles, we consider the question of who should be responsible when an AI makes a mistake. We’ll do this by exploring two current or near-future examples of the technology: a trading algorithm and an autonomous car.
The examples we’ve taken are not “general” or “strong” AIs (i.e. a computer system that could potentially take on any task it is faced with – a technology which, for the time being at least, remains the stuff of science fiction). Instead we are considering “narrow” or “weak” AIs: computer systems which can analyse data in order to take actions that maximise their chance of success at an identified goal.
Our example AI application for this article is a trading algorithm. Our algorithm has a particular goal: to execute trades which seek to maximise return on investment in a fund (perhaps with further parameters set around risk tolerance, desired levels of return, and/or excluding certain classes of assets, to align with the profile of the fund). The algorithm has been developed using machine learning (and continues to use machine learning to refine its decision making process on a day-to-day basis).
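To make the “goal plus parameters” arrangement above concrete, the following is a minimal, purely hypothetical sketch of how such constraints might sit around a return-maximising choice. The constraint names, thresholds, and trade fields are all illustrative assumptions, not the workings of any real trading system:

```python
# Hypothetical fund-profile constraints (illustrative values only)
RISK_TOLERANCE = 0.15                    # maximum acceptable volatility per trade
EXCLUDED_CLASSES = {"tobacco", "arms"}   # asset classes excluded from the fund

def permitted(trade):
    """Check a candidate trade against the fund's profile constraints."""
    return (trade["asset_class"] not in EXCLUDED_CLASSES
            and trade["volatility"] <= RISK_TOLERANCE)

def choose_trade(candidates):
    """Among permitted trades, pick the one with the highest expected return."""
    allowed = [t for t in candidates if permitted(t)]
    if not allowed:
        return None
    return max(allowed, key=lambda t: t["expected_return"])

candidates = [
    {"asset_class": "tobacco",  "volatility": 0.05, "expected_return": 0.12},
    {"asset_class": "equities", "volatility": 0.30, "expected_return": 0.20},
    {"asset_class": "bonds",    "volatility": 0.04, "expected_return": 0.03},
    {"asset_class": "equities", "volatility": 0.10, "expected_return": 0.08},
]

best = choose_trade(candidates)
print(best["asset_class"], best["expected_return"])  # → equities 0.08
```

Note how the excluded-class and high-volatility trades are filtered out before the return-maximising choice is made; in a real system the selection step would be a learned model rather than a simple maximum, but the surrounding constraints play the same role.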
Our algorithm has been licensed to a bank by the software house that originally wrote and developed it. Once the licence agreement was signed, the software house set about training the algorithm to achieve its goal by feeding it a diet of labelled data from the software house’s own banks of historic data. After the training and subsequent demonstration phase (in which the software house demonstrated to the bank the decisions the algorithm would have made on live market data), the bank accepts the algorithm and it starts trading the bank’s money.
After a year of successful trading, in which the algorithm used the results of its previous decisions to refine its investing strategy and deliver significant returns to the bank, the algorithm makes a catastrophic error: it executes a very large but seemingly illogical series of trades, resulting in a loss of tens of millions of pounds.
The first question here is whether anyone should be liable for this “glitch” at all. Markets can be volatile, and a bad investment decision is not, in itself, grounds for attributing liability to anyone.
Moreover, the fact that the trades seemed illogical in hindsight does not necessarily mean they were wrong. In one of its matches against the Go world champion Lee Sedol in March 2016, Google DeepMind’s AlphaGo AI made such a highly unusual move that commentators thought the AI must have malfunctioned – but presumably AlphaGo had its reasons, as it went on to win both the match and the series.
This leads on to a second problem with ascribing liability for losses caused by AI: proving who is at fault. AlphaGo could not express why it made the move it did, or explain what in its experience of playing Go led it to conclude that the move would be a wise one. Likewise, in the example of our algorithm, it is very unlikely that the background to the decision could be unpicked to see what previous experience caused it to be made. Without this ability to interrogate the decision, it would not be possible to say whether the fault lay in the original code written by the software house or in the diet of data the algorithm was fed (and in the latter case, whether it arose from the training data or from the real “live” decisions made once in use by the bank).
In practice, questions of liability are likely to be answered by the parties through the contracts they enter into, much as they are today. It may take some time for a ‘market standard’ approach to emerge, but standard approaches seem likely to develop as the licensing of AI systems in environments carrying a degree of risk becomes more widespread.