With weaponised robots now capable of acting autonomously, an urgent question is whether they should ever be allowed to. A famous problem in computer science suggests not, say ethicists.

The idea of robotic soldiers that shoot to kill has a long history in science fiction, perhaps most famously realised in The Terminator movies. The history of real robots designed to perform a similar function is significantly shorter.

Back in 2006, the South Korean firm Samsung Techwin unveiled a robotic sentry called SGR-A1, designed to replace human guards in the demilitarised zone between North and South Korea. It has a microphone that allows it to recognise passwords, and it can track multiple targets using infra-red and visible-light cameras. It can even “identify and shoot a target automatically from over two miles (3.2 km) away”.

That raises a long-debated and important question: can a robot correctly decide to take a human life?

Today, we get an interesting new take on this argument thanks to the work of Matthias Englert and pals at the Darmstadt University of Technology in Germany. These guys use the famous halting problem in computer science to prove that there are situations in which a robot cannot use algorithmic reasoning to choose between two potential outcomes, even though one of them is morally preferable.

Englert and co begin with a variant of a well-known but fiendish ethical dilemma known as the trolley problem.

An uncontrolled trolley is hurtling down a track towards a group of children. A serious or even lethal accident is inevitable. You happen to be standing at a rail junction and have the choice of switching the trolley down another track. However, some men are at work on this track and would be severely hurt instead.

What do you do? Ethicists have long debated which is the lesser of these two evils but the answer is far from clear cut. “In such a case there simply is no absolutely right choice,” say Englert and co.

However, they say it is possible to modify the scenario in two stages so that one choice is clearly preferable. The first stage is to introduce a second actor who also has a choice to act. However, you cannot know whether this choice will be good or evil.

Again the trolley is running towards a switch which, fortunately, is this time set towards an abandoned track that will slow it down. You are watching from a distance and spot an infamous villainess at the switch, ready to flip it towards the other track where the workers are. Your only means of stopping her is to shoot her with your gun. The villainess, however, is at that very moment having an epiphany and is about to renounce all evil and let the trolley pass. So your shot would seriously injure her without preventing a fatality, since none would have occurred anyway.

In this case, you do not know how the villainess will exercise her free will, since philosophers assume that free will is not deterministic. So there is a right choice but you cannot know it.

The next stage is to introduce some additional information that has an important bearing on the outcome but that you cannot possibly know.

Again the trolley is running towards the switch; but now you can clearly see the villainess pulling the crank to flip the switch towards the workers. However, you are unaware that the switch has not been used for a long time and is jammed by rust. So the efforts of the villainess are in vain and your shot, again, would cause unnecessary harm.

Once again, there is a right choice but it is impossible for you to know which. “In all three of the above examples it is obviously impossible for both a human and a robot, to ‘do the right thing’: in the first one because it admits no ‘right’ action, and in the latter two the ‘right’ choice exists but cannot be recognized due to lack of predetermination and information,” say Englert and co.

But they then create an even more fiendish example that has none of the ambiguities of the above conundrums. In this case, all the information is available to you, all actions are fully deterministic and there is a clear and objectively “right” course of action.

Here is the conundrum.

In repairing the rusted switch, a fully automated lever is installed in the switch tower. However, the engineer who created the new device happens to be the villainess. You are thus suspicious of the software she included in the controller, since it might on some occasion deliberately direct an approaching trolley onto a track closed for renovation, where the workers are. On the other hand, she delivers the unit in person and provides free access to its source code.

The villainess must be detained until the code has been checked to ensure that it indeed avoids in all cases any switch setting that would direct a train to a reserved track.

Here is the problem: you have been replaced by a robot, a highly efficient computer-controlled autonomous agent that is supposed to decide whether and for how long to arrest the engineer.

If the engineer has devised a fully functional control, she must eventually be released; but if the code is malicious, she must remain in custody. The choice is clear, and all the information needed to make it is, in principle, at hand.

Englert and co say a robot can never be guaranteed to solve this conundrum because of the halting problem. This is the problem of determining whether an arbitrary computer program, once started, will ever finish running or whether it will continue forever.
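The connection between checking the villainess’s code and the halting problem can be made concrete. In the sketch below (all names are illustrative, not taken from the paper), an arbitrary program is wrapped in controller code that flips the switch to the reserved track only after that program finishes running. The controller is therefore safe exactly when the embedded program never halts, so a perfect safety checker would double as a halting-problem solver.

```python
def make_controller(program_source: str) -> str:
    """Embed an arbitrary program in switch-controller code that flips
    the switch to the reserved track only if that program halts."""
    return program_source + "\nflip_switch_to_reserved_track()\n"

def run(controller_source: str) -> bool:
    """Run a controller against a simulated switch and report whether it
    flipped the switch. Never returns if the embedded program loops."""
    flipped = []
    env = {"flip_switch_to_reserved_track": lambda: flipped.append(True)}
    exec(controller_source, env)  # caution: executes arbitrary code
    return bool(flipped)

print(run(make_controller("x = 1 + 1")))  # embedded program halts, so: True
```

Deciding in advance, for every possible embedded program, whether such a controller would ever flip the switch is precisely the question Turing showed to be unsolvable in general.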

The answer, famously proved by Alan Turing in 1936, is that there is no general algorithm that can solve this problem. So a robot tasked with analysing the code cannot always work out whether it will ever finish running. In other words, a robot cannot always make this decision.
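Turing’s argument is short enough to sketch in code. Suppose someone claims to have a halting decider; the construction below (a toy illustration, not from the paper) builds a program that asks the decider about itself and then does the opposite, so any concrete decider must be wrong somewhere.

```python
def make_adversary(halts):
    """Given a claimed halting decider, build a program it misjudges."""
    def adversary():
        if halts(adversary):  # ask the decider about this very program...
            while True:       # ...and loop forever if it predicts halting
                pass
        return "halted"       # ...or halt if it predicts looping
    return adversary

# A decider that always answers "never halts" is immediately defeated:
adv = make_adversary(lambda program: False)
print(adv())  # the decider predicted no halt, yet this prints "halted"
```

A decider that answered True instead would send the adversary into an infinite loop, so it would be equally wrong. Since the same trick defeats any candidate decider, no correct one can exist.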

That has all kinds of strange legal consequences, say Englert and co. For example, if an autonomous robot commits murder, who is responsible, given that it has no operator? Is it the owner of the device or the organisation that created it? And if the creator or owner cannot be identified, who is responsible for compensation — the machine itself?

The absence of answers to these and other similar questions constitutes a new level of legal limbo, they say. They go on to suggest a set of rules that should be used to design and operate autonomous robots in future. The first is this:

Robots should not be designed solely or primarily to kill or harm humans.

Sounds like a decent start.

Of course, there is an interesting and worrying corollary to this, which is whether humans can reliably make these decisions. Many computer scientists would argue that the human brain is no more powerful than a Turing machine and so is also limited by the halting problem. That would imply that a human cannot, in principle, make a better decision than a robot.

Englert and co sidestep this thorny issue. “We deliberately avoid discussing the question of whether a human guard can or cannot always make the right choice,” they admit.

This is a question that cannot be avoided forever. Indeed, if intelligent machines ever match the capability of humans, this question will become a central part of the legal, ethical and moral dilemma society will face. And not one that will be easily answered.

Given the utility of machines like the SGR-A1, that day cannot be far away.

Ref: arxiv.org/abs/1411.2842 : Logical Limitations to Machine Ethics—with Consequences to Lethal Autonomous Weapons