When robots kill

Not long ago I wrote about a worker who was “attacked” by an industrial robot. In the aftermath, the role of the courts was to attempt to decide who was responsible for the industrial accident. But what will happen when robots become more autonomous?

The Royal Academy of Engineering has published a report on the social, legal and ethical issues surrounding autonomous systems. As one of the contributors, Chris Elliot, a lawyer and visiting professor at Imperial College London, told the Guardian:

If you take an autonomous system and one day it does something wrong and it kills somebody, who is responsible? Is it the guy who designed it? What’s actually out in the field isn’t what he designed because it has learned throughout its life. Is it the person who trained it?

These are very cool questions, and they need to be discussed now, as we stand on the eve of the age of autonomous systems. Read the report here.

Naturally, the whole autonomous systems discussion brings to mind the Skynet plot from Terminator. From Wikipedia:

In the Terminator storyline, Skynet gains sentience shortly after it is placed in control of all of the U.S. military’s weaponry. When the human operators realize that it has become self-aware, and what the computer control is capable of, they try to shut the system down. It retaliates, believing humans are a threat to its existence, and employs humankind’s own weapons of mass destruction in a campaign to exterminate the human race.

But if that happened, I doubt that legal responsibility would be the most important thing to discuss…

Humpty Dumpty and irreversible systems

While reading a bit of retro work, I came across this:

A little known law of life is that of irreversibility. No human or physical act or process can be reversed so that objects and states end up as they were. During the original act and in the time just after it, both object and state undergo change that is irreversible. An early known poem, Humpty-dumpty, recognises this. Once the egg is broken, that is that.

It is the same with systems. They can never be reversed. They can be changed, certainly, and sidetracked, and they can be very easily destroyed. The moment a human-machine information system comes into being, it takes on a life of its own, independent of its creators. The operators just run it, while programmers merely maintain it. The process called entropy begins, a confusion that can be measured by the growing gulf between what people first knew about the system and now know about it.

Brian Rothery (1971), The Myth of the Computer, Business Books, p. 43.