Not long ago I wrote about a worker who was “attacked” by an industrial robot. In the aftermath, the role of the courts was to decide who was responsible for the industrial accident. But what happens when robots become more autonomous?
The Royal Academy of Engineering has published a report on the social, legal and ethical issues surrounding autonomous systems. One of the contributors, Chris Elliot, a lawyer and visiting professor at Imperial College London, told the Guardian:
If you take an autonomous system and one day it does something wrong and it kills somebody, who is responsible? Is it the guy who designed it? What’s actually out in the field isn’t what he designed because it has learned throughout its life. Is it the person who trained it?
These are fascinating questions which need to be discussed now, as we stand on the eve of autonomous systems. Read the report here.
Naturally, the whole autonomous systems debate brings to mind the Skynet plot from Terminator. From Wikipedia:
In the Terminator storyline, Skynet gains sentience shortly after it is placed in control of all of the U.S. military’s weaponry. When the human operators realize that it has become self-aware, and what the computer control is capable of, they try to shut the system down. It retaliates; believing humans are a threat to its existence, it then employs humankind’s own weapons of mass destruction in a campaign to exterminate the human race.
But if that happened, I doubt legal responsibility would be the most important thing to discuss…