A global movement is gaining traction in its effort to ban 'killer robots' that are able to target designated enemies on their own

A slew of reports over the past two weeks detailing cases of U.S. armed drones killing civilians signaled a new wave of outrage over the unregulated use of drones by the U.S. There was one report from the U.N., another from Human Rights Watch (HRW) and one from Amnesty International. The uproar — and the sense that Washington has done little to make its use of drones more transparent — culminated in a debate on Friday at the U.N.

But a parallel movement has emerged to make sure that a different and perhaps more terrifying technology never makes it this far.

The Campaign to Stop Killer Robots is a coalition of weapons monitors and human-rights groups leading an effort, formally launched in April, to establish an international ban on fully autonomous lethal weapons. Dubbed "killer robots" by opponents, these are weapons that could select and kill human targets without any human input. Whereas drones today have someone somewhere remotely determining where and when to fire, a fully autonomous air, land or sea weapon could make those decisions on its own.

It sounds like the stuff of sci-fi, but the technology is well within reach given existing weaponry. The U.S. Navy's X-47B, a Northrop Grumman–developed drone, has taken off and landed on an aircraft carrier — one of the hardest maneuvers in aviation — entirely on its own, and it would be only a short step to add missiles to its weapons bay. In South Korea, a Samsung subsidiary designed, several years ago, a stationary robot sentry that sits along the demilitarized zone and can identify and fire at a target on its own. For now, it remains linked to a human operator.

Some critics say giving nonhuman technology the ability to decide if a human lives or dies is simply morally reprehensible — on the same level as chemical and nuclear weapons. They also say there are just too many uncertainties in the machines’ circuitry. What if a killer robot malfunctions and begins firing at random? What if it’s hacked? How quickly will the technology proliferate to rival states and nonstate actors like extremist militants? And who exactly is held legally accountable when a killer robot attacks?

The campaign released a statement earlier this month signed by 272 computer-science experts from 37 countries supporting a ban on development "given the limitations and unknown future risks of autonomous robot weapons technology."

“We are concerned about the potential of robots to undermine human responsibility in decisions to use force, and to obscure accountability for the consequences,” the statement reads.

Still, the U.S. military is loath to rule out development of a new technology. Last year, days after HRW released a report calling for a ban, the Department of Defense issued an ambiguous directive on autonomous weapons that restricts, but does not rule out, their use in the field for the time being. It remains the only government policy on the technology, and, unsurprisingly, few other countries have formal policies on the issue. But advocates say the weapons could, down the line, in fact become a crucial tool for saving lives.

“While a pre-emptive ban may seem like the safest path, it is unnecessary and dangerous,” wrote law professors Matthew Waxman and Kenneth Anderson, both members of the Hoover Institution Task Force on National Security and Law. “If the goal is to reduce suffering and protect human lives, a ban may be counterproductive. It is quite possible that autonomous machine decisionmaking may, at least in some contexts, reduce risks to civilians by making targeting decisions more precise and firing decisions more controlled.”

Opponents point out that a pre-emptive weapons ban is not unprecedented. In 1995, parties to the U.N.'s Convention on Certain Conventional Weapons (CCW) added a protocol banning blinding lasers. Leading up to that ban, the U.S. initially opposed the measure but reversed its position after considering the potential for mass proliferation, recalled Stephen Goose, the director of the arms division at HRW. The U.S. change of heart was enough to generate the necessary support.

For now, the U.S. says it doesn't support an international ban. But representatives at the U.N. met earlier this week to discuss the potential threat of autonomous weapons, and France, which chairs the next meeting of the CCW in November, has pledged to put the topic of killer robots on the agenda. The U.S. will conspicuously be there.

“It really reinforces that governments and militaries understand that this is something of real concern,” Goose says. “Them sitting down and talking about this is a very good thing.”