Do We Need Robot Law?


14 February 2017

Roger Bickerstaff recently spoke at a joint British Academy/Royal Society event looking at this topic from the perspective of law, computer science and psychology – see http://www.britac.ac.uk/events/do-we-need-robot-law and http://www.prospectmagazine.co.uk/britishacademy/do-we-need-robot-laws

Here are his notes for the event:

The quick answer to the question of whether we need robot law is that, at this stage in the development of AI, there is no need for a general robot law along the lines of Isaac Asimov’s three (or four) laws of robotics.

That being said, there are many areas where existing law needs to be changed, and new laws introduced, in order to deal with robotics and artificial intelligence.

In order to classify the changes, three types of change to the legal environment can be identified:

  • Facilitative changes – these are changes to law that are needed to enable the use of AI
  • Controlling changes – these are changes to law and new laws that may be needed to manage the introduction and scope of operation of robotics and artificial intelligence
  • Speculative changes – these are the changes to the legal environment that may be needed as robotics and AI start to approach the same level of capacity and capability as human intelligence – what is often referred to as the singularity point.

Facilitative changes to the law: Looking first at the facilitative changes, some existing laws are restraining the introduction of AI – probably fewer than might be expected. These laws need to be modified if we want to introduce AI. Examples include:

  • Intellectual Property and, in particular, copyright – there may be a need for a text and data mining exception in order for AI systems to interrogate data sets which include copyright materials. The interrogation of data sets by AI systems may be a copyright infringement and this needs to be sorted out.
  • Transport law – transport is a highly regulated environment. There are many examples where the law needs to be changed in order to allow for autonomous vehicles, planes, trains, ships etc. In the UK the Centre for Connected & Autonomous Vehicles has recently run a consultation process on the approach to regulatory reform to enable autonomous road vehicles. The government has confirmed that it will continue a rolling programme of regulatory reform so that, broadly speaking, the regulatory environment is in place in time to meet technology developments.
  • Workplace law – an area that we’ve been discussing recently at Bird & Bird. As AI is increasingly used in the workplace, concepts need to be clarified on issues such as robotic workplace harassment: can an employer be liable for harassment of an employee by an AI system?

For Tech lawyers these are interesting and important issues but – in general terms – they are conventional Tech law. The law has always needed to evolve to cope with technological innovation. Sorting out these types of legal issues is the type of work that Tech lawyers do on a daily basis.

Controls on the introduction of AI and the scope of usage of AI: this is the second category where legal changes may be required. Perhaps we need to think about the ways in which legal protections can be introduced to ameliorate the economic consequences of widespread AI adoption. Widespread disruption frequently tends to work to the economic benefit of a very small minority, and it can take many years for the benefits to spread around society in general.

There are lots of statistics and discussions on the overall economic benefits and risks associated with the widespread adoption of AI systems. My take is that AI is disruptive and that technology that is seriously disruptive generally has a disruptive impact on society. It is said that the adoption of autonomous vehicles in the US may result in the loss of at least 3m driving jobs. This would be very disruptive – on a scale way beyond the current levels of voter dissatisfaction in the US. This is risky stuff to play around with.

We can take steps, through changes to the legal framework, to ensure that increasing inequality is not an effect of the widespread adoption of AI. We need to think about the modern-day equivalents of the 19th Century Factories Acts, which protected factory workers from being forced to work excessively long hours. There are parallels now with the “gig” economy and the tendency for disruptive Tech companies to expect workers to work on a self-employed basis rather than as employees.

Neil Brown suggests that human impact assessments should be a requirement for the introduction of AI systems: before an AI system is introduced, a human impact assessment should be carried out to assess whether the introduction of the system will be beneficial. This is certainly not a complete solution, but it is an interesting concept.

Legal Controls on the Speculative Consequences of AI: This is the really scary stuff. Stephen Hawking has commented, “the development of full artificial intelligence could spell the end of the human race”. He was talking in the context of AI weaponry but there are serious concerns about the unforeseen consequences of singularity – the point when AI becomes just as intelligent (if not more intelligent) than human intelligence.

No one knows when this will occur – sometime over the next 100 years seems to be the best estimate. (I don’t think you need to be too much of an expert to make that prediction). And no one knows what the consequences will be when singularity occurs.

I’m interested in the approach of the Machine Intelligence Research Institute. Building on the ideas of Eliezer Yudkowsky and Nick Bostrom, MIRI is trying to determine the mathematics that underpins human intelligence in order to develop tools that will enable the development of beneficial general AI. The idea of trying to discover the mathematics that underpins human intelligence is a somewhat baffling and unlikely prospect. But when Newton said that he could use mathematics to predict celestial and terrestrial motions, most people found the concept equally baffling. We shouldn’t dismiss the idea altogether.

Whilst there is no need for a general law of robotics at this stage, the principles that underpin Asimov’s ideas are worth thinking about. An expert group from the Engineering and Physical Sciences Research Council has taken these thoughts forward and has identified five principles of robotics, including the principles that humans, not robots, are responsible agents, and that it should always be possible to identify the person with legal responsibility for a robot.

These are useful principles. They are a sensible basis for the control and management of the legal risks associated with AI systems. They are not law, and there may be no need for them to become law, but they are useful guidance for the development of laws associated with the implementation of widespread AI.

Roger Bickerstaff

Written by
Roger Bickerstaff
Roger is a partner at Bird & Bird LLP in London and San Francisco and Honorary Professor in Law at Nottingham University. Bird & Bird LLP is an international law firm specializing in Tech and digital transformation.