Dozens of scientific and technological luminaries have signed an open letter calling for the world's governments to ban the development of "offensive autonomous weapons" to prevent a "military AI arms race."

The letter, which will be presented at the International Joint Conferences on Artificial Intelligence (IJCAI) in Buenos Aires tomorrow, is signed by Stephen Hawking, Elon Musk, Noam Chomsky, Steve Wozniak, and dozens of other AI and robotics researchers.

For the most part, the letter is concerned with dumb robots and vehicles being turned into smart autonomous weapons. Cruise missiles and remotely piloted drones are okay, according to the letter, because "humans make all targeting decisions." The development of fully autonomous weapons that can fight and kill without human intervention should be nipped in the bud, however.

Here's one of the main arguments from the letter:

The key question for humanity today is whether to start a global AI arms race or to prevent it from starting. If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow.

Later, the letter draws a strong parallel between autonomous weapons and chemical/biological warfare:

Just as most chemists and biologists have no interest in building chemical or biological weapons, most AI researchers have no interest in building AI weapons — and do not want others to tarnish their field by doing so, potentially creating a major public backlash against AI that curtails its future societal benefits.

The letter is being presented at IJCAI by the Future of Life Institute. It isn't entirely clear who the letter is addressed to, other than the academics and researchers attending the conference. Perhaps it's simply intended to raise awareness of the issue, so that we don't turn a blind eye to any autonomous weapons research being carried out by major military powers.

Elon Musk and Stephen Hawking have both previously warned of the dangers of advanced AI. Musk said that AI is "potentially more dangerous than nukes," while Hawking went even further, calling AI "our biggest existential threat."

The main issue with AI in general, and autonomous weapons in particular, is that they are transformational, sea-change technologies. Once we create an advanced AI, or a weapons system that can decide for itself whom to attack, there's no turning back. We couldn't un-invent gunpowder or nuclear weapons, and autonomous weaponry would be no different.