Elon Musk and other tech leaders have pledged never to build killer robots, possibly reducing the chances of a future Skynet-type disaster. This is the first time AI bigwigs have pledged not to develop lethal autonomous weapons.

Musk, the three co-founders of Google’s AI subsidiary DeepMind, Skype founder Jaan Tallinn, and other giants of the tech industry signed the pledge, published Wednesday at the 2018 International Joint Conference on Artificial Intelligence in Stockholm. The agreement was coordinated by the Future of Life Institute (FLI), a Boston-based organization that supports research and initiatives to safeguard life, particularly against the risks posed by advanced artificial intelligence.


The letter warns that, with AI “poised to play an increasing role in military systems,” citizens and lawmakers urgently need to “distinguish between acceptable and unacceptable uses of AI.”

“Lethal autonomous weapons, selecting and engaging targets without human intervention, would be dangerously destabilizing for every country and individual,” it reads. “Thousands of AI researchers agree that by removing the risk, attributability, and difficulty of taking human lives, lethal autonomous weapons could become powerful instruments of violence and oppression.

“In this light, we the undersigned agree that the decision to take a human life should never be delegated to a machine. There is a moral component to this position, that we should not allow machines to make life-taking decisions for which others – or nobody – will be culpable.”

Last year, Musk and 115 other scientists signed another agreement put together by FLI, calling on the United Nations to regulate AI technology.

Some 26 UN member states, including Austria and China, have explicitly endorsed the call for a ban on lethal autonomous weapon systems. Notable Western countries such as the US, UK, Canada, and Australia were absent from the list of those calling for a killer robot ban.
