Killer robots are in the news again: following fears over AI-powered autonomous tanks, the US Department of Defense has issued a statement reassuring people that humans will always be the ones to make the final decision on whether a lethal robot fires at a target.

This clarification comes at a time when employees at various technology companies – including Microsoft – have been expressing their concern over how their AI is used. One such individual is Liz O’Sullivan, former employee at Clarifai, an AI company specialising in machine-vision tech including facial recognition. Ms O’Sullivan said she left her job after hearing Clarifai chief executive Matt Zeiler say he was willing to work on autonomous weapons.

“The core issue is whether a robot should be able to select and acquire its own target from a list of potential ones and attack that target without a human approving each kill,” said Ms O’Sullivan.

She gave the example of the Israeli Harpy 2 drone, or Harop, a fully autonomous weapon currently in use and sold to the governments of South Korea, Turkey, China and India. Dubbed the “suicide drone”, it is capable of seeking out enemy radar signals and blowing up its target without human intervention.

“When presented with the Harop, a lot of people look at it and say, ‘It’s scary, but it’s not genuinely freaking me out.’ But imagine a drone acquiring a target with a technology like face recognition,” Ms O’Sullivan added.