Artificial intelligence researchers have called on Google to abandon a project developing AI technology for the military, warning that autonomous weapons directly contradict the firm’s famous ‘Don’t Be Evil’ motto.

The experts join more than 3,100 of Google’s own employees, who signed an open letter last month protesting the company’s involvement in a controversial Pentagon program called Project Maven.

The partnership between the technology giant and the US military involves using customised AI surveillance software to analyse drone footage and better recognise target objects, such as distinguishing between a person on the ground and a vehicle.

Around a dozen employees have reportedly resigned in protest at Google’s refusal to cut ties with the US military, each one citing ethical concerns to Gizmodo. Google did not respond to a request for comment from The Independent.

In their letter last month to Google CEO Sundar Pichai, the employees wrote: "We believe that Google should not be in the business of war... We cannot outsource the moral responsibility of our technologies to third parties."

The researchers warn that, if Google’s technology proves effective, the military could ultimately remove human oversight from drone strikes entirely.

“As military commanders come to see the object recognition algorithms as reliable, it will be tempting to attenuate or even remove human review and oversight for these systems,” the letter states.

“If ethical action on the part of tech companies requires consideration of who might benefit from a technology and who might be harmed, then we can say with certainty that no topic deserves more sober reflection - no technology has higher stakes - than algorithms meant to target and kill at a distance and without public accountability.”

[In pictures: Artificial intelligence through history — a seven-part gallery: Boston Dynamics’ robots for DARPA; Google’s self-driving cars; the DARPA Urban Challenge; IBM’s Deep Blue beating Garry Kasparov in 1997; IBM’s Watson winning Jeopardy in 2011; Apple’s Siri; and Xbox’s Kinect motion tracking]

Other fears detailed in the letter include the possibility of Google integrating the personal data of its users with military surveillance data for the purpose of targeted killing.

According to the researchers, using such data would violate the public trust on which Google’s business depends and would put the lives and human rights of its users in jeopardy.

"The responsibilities of global companies like Google must be commensurate with the transnational makeup of their users," the letter states.