To most of us, autonomous killer robots are the stuff of science fiction. But according to a new report, they could become a reality sooner than we might think—and that is a very bad idea.

In advance of a week-long United Nations meeting about lethal autonomous weapons systems (LAWS), which kicked off today, Human Rights Watch and Harvard Law School released a report that strongly discouraged the development of robots that could kill a target on their own, without any human intervention.

These devices would represent a "step beyond" today's remote-controlled drones, which are unmanned but ultimately controlled by people.

"Fully autonomous weapons do not yet exist, but technology is moving in their direction, and precursors are already in use or development," the report said. "For example, many countries use weapons defense systemssuch as the Israeli Iron Dome and the US Phalanx and C-RAMthat are programmed to respond automatically to threats from incoming munitions."

Anyone who has seen a Terminator movie can probably guess why killer robots are a bad idea. But the report lays it out for anyone still on the fence.

The authors pointed to proportionality in war, which bans attacks that cause civilian casualties out of proportion to the military gain. "According to the US Air Force, 'proportionality in attack is an inherently subjective determination that will be resolved on a case-by-case basis,'" the report said. But "it would be nearly impossible to pre-program a machine to handle the infinite number of scenarios it might face."

"It would be difficult to replicate in machines the judgment that a 'reasonable military commander' exercises to assess proportionality in unforeseen or changing circumstances," the report said.

But what if these machines were deployed and something went wrong? Who is to blame?

"The weapons themselves could not be held accountable for their conduct because they could not act with criminal intent, would fall outside the jurisdiction of international tribunals, and could not be punished," the report found. "Criminal liability would likely apply only in situations where humans specifically intended to use the robots to violate the law."

Meanwhile, good luck trying to sue a Northrop Grumman or Lockheed Martin. "In the United States at least, civil liability would be virtually impossible due to the immunity granted by law to the military and its contractors and the evidentiary obstacles to products liability suits."

Then there's the problem of an arms race. If the U.S. developed one of these devices, other superpowers would likely follow suit.

"Once developed, fully autonomous weapons would likely proliferate to irresponsible states or non-state armed groups, giving them machines that could be programmed to indiscriminately kill their own civilians or enemy populations," the report said. "Some critics also argue that the use of robots could make it easier for political leaders to resort to force because using such robots would lower the risk to their own soldiers; this dynamic would likely shift the burden of armed conflict from combatants to civilians."

In sum, the researchers called for a ban, through national laws and international policy, on the development, production, and use of fully autonomous weapons. So for now, you'll have to make do with on-screen killer robots, which make their return in the Terminator franchise this summer.
