Last week, CEO Sundar Pichai told employees that the company wanted to develop principles that "stood the test of time," according to those present for his remarks, and Google told The New York Times that those guidelines would prohibit the use of AI in weaponry. How Google intends to enforce that prohibition is currently unclear, but employees said they expected the principles to be announced internally within the next few weeks.

Whatever these guidelines turn out to be, it wouldn't be surprising to see a continued backlash to the company's contract with the Pentagon. Banning collaboration on weaponized AI may not be enough to quell concerns if there's a chance that the "non-offensive" involvement, as Google calls it, could lead to offensive actions, such as drone strikes. "Amid growing fears of biased and weaponized AI, Google is already struggling to keep the public's trust," the Google petition read. "The argument that other firms, like Microsoft and Amazon, are also participating doesn't make this any less risky for Google. Google's unique history, its motto Don't Be Evil, and its direct reach into the lives of billions of users set it apart."

Earlier this month an employee who left Google due to the contract told Gizmodo, "I wasn't happy just voicing my concerns internally. The strongest possible statement I could take against this was to leave."