Losing control: The dangers of killer robots

By Bonnie Docherty

Published 21 June 2016

New technology could lead humans to relinquish control over decisions to use lethal force. As artificial intelligence advances, the possibility that machines could independently select and fire on targets is fast approaching. Fully autonomous weapons, also known as “killer robots,” are quickly moving from the realm of science fiction toward reality. While the process of creating international law is notoriously slow, countries can move quickly to address the threats of fully autonomous weapons. They should seize the opportunity presented by the Convention on Conventional Weapons review conference, to be held this December, because the alternative is unacceptable: Allowing technology to outpace diplomacy would produce dire and unparalleled humanitarian consequences.

These weapons, which could operate on land, in the air, or at sea, threaten to revolutionize armed conflict and law enforcement in alarming ways. Proponents say killer robots are necessary because modern combat moves so quickly, and because having robots do the fighting would keep soldiers and police officers out of harm’s way. But the threats to humanity would outweigh any military or law enforcement benefits.

Removing humans from the targeting decision would create a dangerous world. Machines would make life-and-death determinations outside of human control. The risk of disproportionate harm or the erroneous targeting of civilians would increase, and no person could be held responsible.

Given the moral, legal and accountability risks of fully autonomous weapons, preempting their development, production and use cannot wait. The best way to handle this threat is an international, legally binding ban on weapons that lack meaningful human control.

Preserving empathy and judgment

At least twenty countries have expressed in UN meetings the belief that humans should dictate the selection and engagement of targets. Many of them have echoed arguments laid out in a new report, of which I was the lead author. The report was released in April by Human Rights Watch and the Harvard Law School International Human Rights Clinic, two organizations that have been campaigning for a ban on fully autonomous weapons.

Retaining human control over weapons is a moral imperative. Because they possess empathy, people can feel the emotional weight of harming another individual. Their respect for human dignity can – and should – serve as a check on killing.

Robots, by contrast, lack real emotions, including compassion. In addition, inanimate machines could not truly understand the value of any human life they chose to take. Allowing them to determine when to use force would undermine human dignity.