And they could select targets 'without meaningful human control'

If a robot unlawfully kills someone in the heat of battle, who is liable for the death?

In a report, Human Rights Watch has highlighted the rather disturbing answer: no one.

The organisation says that something must be done about this lack of accountability - and it is calling for a ban on the development and use of ‘killer robots’.


The HRW will present its report at a major international meeting on ‘lethal autonomous weapons systems’ at the UN in Geneva from 13 to 17 April.

Called ‘Mind the Gap: The Lack of Accountability for Killer Robots,’ it details the legal hurdles to holding anyone accountable when robots kill without being controlled by humans.

‘No accountability means no deterrence of future crimes, no retribution for victims, no social condemnation of the responsible party,’ said Bonnie Docherty, senior Arms Division researcher at the HRW and the report’s lead author.

WILL ROBOTS TURN US INTO PETS? Robots will use humans as pets once they achieve a form of artificial intelligence known as 'superintelligence'. This is according to SpaceX founder Elon Musk, who claims that when computers become smarter than people, they will treat humans like 'pet Labradors'. He made the comments in an interview last month with scientist Neil deGrasse Tyson, who added that computers could choose to breed docile humans and eradicate the violent ones.

‘The many obstacles to justice for potential victims show why we urgently need to ban fully autonomous weapons.’

The organisation says that fully autonomous weapons are dangerous as they can select and engage targets ‘without meaningful human control.’

It notes that such robots do not exist yet, but that the ‘rapid movement of technology in that direction’ is a cause for concern.

For example, if a robot were to kill a civilian by mistake during a conflict, no one would be held accountable.

‘Programmers, manufacturers, and military personnel could all escape liability for unlawful deaths and injuries caused by fully autonomous weapons,’ the HRW said.

Some robots are already used on the battlefield, such as bomb disposal robots (artist's illustration shown), but these are controlled by humans. The HRW warns that rapid advances in technology will allow robots to become autonomous and carry out their own actions

Pictured is the LS3, dubbed AlphaDog, a four-legged, autonomous robot that can follow a soldier, and is designed to carry heavy equipment. This is one example of how robots are already being designed to help soldiers on the battlefield

If a military commander or operator intentionally deployed a robot to commit a crime, they would of course be held accountable, said the HRW.

‘But they would be likely to elude justice in the more common situation in which they could not foresee an autonomous robot’s unlawful attack and/or were unable to stop it,’ they added.

‘A fully autonomous weapon could commit acts that would rise to the level of war crimes if a person carried them out, but victims would see no one punished for these crimes,’ said Docherty.

‘Calling such acts an “accident” or “glitch” would trivialise the deadly harm they could cause.’

The organisation has also questioned whether it is ethical to allow machines to make life-and-death decisions on the battlefield in the first place.