More than a thousand researchers, AI experts, and high-profile business leaders warn that warfare is getting out of hand and call for a ban on “offensive autonomous weapons,” lest the world powers wind up in a “military artificial intelligence arms race.” They would ban the development of AI for warfare and of autonomous weapons that decide on their own who, what, where, and when to fire. They would draw the line, however, to allow remotely operated devices under human control, such as today’s drones.

The signatories include Tesla CEO Elon Musk, Apple co-founder Steve Wozniak, physicist Stephen Hawking, Google DeepMind CEO Demis Hassabis, and about 1,000 others. The letter will be presented Wednesday at the International Joint Conference on Artificial Intelligence in Buenos Aires, according to the Guardian, which first reported the story.

AI as the third deadly revolution in warfare

According to the letter, “AI technology has reached a point where the deployment of [autonomous weapons] is – practically if not legally – feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms.”

On the one hand, they say, artificial intelligence makes the battlefield safer for soldiers. On the other, it lowers the threshold for going to war, especially for the side that strikes first or has more and better AI weaponry.

Beyond gunpowder and nukes, other technological leaps have given one side an advantage: the Gatling gun of 1862, poison gas and tanks in World War I, massive aerial bombardment of cities in the 1930s (which took war beyond the front line to the civilian population), and potentially biological agents. Ironically, Richard Gatling, inventor of the eponymous weapon, reportedly believed its efficiency would reduce the size of armies and thus the total death and suffering. In practice, the only way it reduced the size of armies was after a battalion charged the guns.

Some new weapons have been banned or sidelined. Blinding laser weapons, for example, have been outlawed under a UN protocol since 1995.

Differences among the signers

There is general agreement that an AI/robotic arms race is bad, especially because autonomous weapons make their own decisions, which could escalate fighting as both sides throw more materiel at each other. There are also differences among the signers. Musk has called AI our “biggest existential threat,” and Hawking has warned that full AI could “spell the end of the human race.” Wozniak, on the other hand, makes an orthogonal point: robots can be good for people. They might become akin to a “family pet … taken care of all the time.” If so, Sony needs to bring back Aibo quickly.

Generally, the 1,000-plus signatories appear to draw a distinction between hands-off autonomous weaponry that uses AI decision-making and devices such as drones, which carry no humans aboard but are controlled from afar (sometimes from back in heartland America) by human operators who decide when to push the button.

Already considered by the UN

In April, a United Nations conference in Geneva discussed futuristic weapons, including killer robots. Some world powers opposed limits or bans. The UK, for instance, argued that a prohibition was unnecessary. According to The Guardian, the UK Foreign Office said, “we do not see the need for a prohibition on the use of LAWS [lethal autonomous weapons systems], as international humanitarian law already provides sufficient regulation for this area.”

Right now, the advantage accrues to major powers with big budgets. Over time, smaller countries or stateless rogue actors could buy or adapt robots and AI for their own purposes. And unlike work on nuclear or chemical weapons, work on AI weapons might be easier to conceal.