A new report states Google has been helping the Pentagon develop artificial intelligence to analyze drone footage and identify objects within it — despite former Google executive chairman Eric Schmidt’s publicly expressed concern over partnering with the government on such issues, and rifts among Google employees over whether the company should play any part in such efforts.

Schmidt had previously been cautious about partnering with the military, saying, “There’s a general concern in the tech community of somehow the military-industrial complex using their stuff to kill people incorrectly,” Gizmodo reports. Whatever concerns Schmidt had were clearly not enough to dissuade the company from pursuing a share of the Department of Defense’s AI spending (the Wall Street Journal reported earlier this week that AI spending at the DoD topped $7.4 billion in 2017).

As far as what the Pentagon wanted, it’s not conceptually much different from what Facebook, Google, Microsoft, and dozens of other companies want. Users create and upload enormous amounts of data. Corporations want to analyze that data to track and monetize users more efficiently. Since no team of human monitors could possibly track, tag, and analyze landmarks, human faces, and other objects or events caught on camera as quickly as new data pours in, Silicon Valley is laser-focused on building algorithms that can handle the proverbial firehose. The Pentagon wanted some of that capability for itself and launched a new initiative to develop it, Project Maven, last year.

There are already myriad ongoing concerns around the monetization of users, end-user privacy, and the impact of algorithmic bias. These issues are greatly magnified when we consider the impact incorrect AI analysis could have on a combat operation or an intelligence-gathering mission. Obviously, human analysis would still be a critical component of any actionable intelligence, but being told that an AI had determined an object resembled X or Y could easily bias the intelligence analyst. As we’ve also discussed, people tend to treat computers as more reliable than human judgment, not less. In some fields, this reliance on a machine’s predictive capabilities is justified. In others, not so much.

Gizmodo notes that despite employee concern, Google’s official message is that this is business as usual. A spokesperson for the company stated:

We have long worked with government agencies to provide technology solutions. This specific project is a pilot with the Department of Defense, to provide open source TensorFlow APIs that can assist in object recognition on unclassified data. The technology flags images for human review, and is for non-offensive uses only. Military use of machine learning naturally raises valid concerns. We’re actively discussing this important topic internally and with others as we continue to develop policies and safeguards around the development and use of our machine learning technologies.
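The arrangement the spokesperson describes, where a model tags images but defers uncertain cases to a person rather than acting on its own, maps to a common triage pattern in machine learning pipelines. Here is a minimal sketch in Python; the labels, confidence values, and threshold are hypothetical illustrations, and nothing here reflects Google’s or the Pentagon’s actual system:

```python
# Hypothetical sketch of the "flag for human review" pattern described
# above. The threshold and labels are illustrative assumptions only.

REVIEW_THRESHOLD = 0.85  # below this confidence, defer to a human analyst


def triage_detections(detections, threshold=REVIEW_THRESHOLD):
    """Split detections into auto-accepted tags and items for human review.

    `detections` is a list of (label, confidence) pairs, the kind of
    output an object recognition model might emit for one video frame.
    """
    auto_tagged, needs_review = [], []
    for label, confidence in detections:
        if confidence >= threshold:
            auto_tagged.append((label, confidence))
        else:
            needs_review.append((label, confidence))
    return auto_tagged, needs_review


# Example: two confident detections and one ambiguous one.
frame = [("building", 0.97), ("vehicle", 0.91), ("vehicle", 0.42)]
tagged, review = triage_detections(frame)
print(tagged)   # [('building', 0.97), ('vehicle', 0.91)]
print(review)   # [('vehicle', 0.42)]
```

Note that the threshold itself becomes a policy decision: set it too low and dubious machine tags sail through unreviewed, which is exactly the failure mode that worries critics of systems like this.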

The military has reportedly set an aggressive timeline for Project Maven, with working solutions required within six months of the project’s initiation and a deployment to the Middle East last December. Maven’s deployment (though not Google’s role in developing it) has been discussed by other sources, seemingly confirming this part of the story.

The use of AI in warfare was inevitable, as was the government’s interest in the technology, but even something as benign as object classification could easily be abused — and that assumes the algorithms in question are always right. We can’t speak to whether Project Maven is being used in ways that conform to its capabilities, or what the criteria are for evaluating its performance, and therefore can’t draw any conclusions regarding the program itself. But its existence does raise questions about how technology developed for low-stakes consumer applications will perform when the stakes are much, much higher.