Silicon Valley is going to have to work through its qualms about providing AI services to the military, said Alphabet board member and former Google CEO Eric Schmidt.

Testifying before the House Armed Services Committee on Tuesday, Schmidt said that AI would be useful for “defensive and perhaps offensive purposes” in warfare, and developing the technology would have to be done with the help of the private sector. When asked by Rep. Elise Stefanik (R-NY) how this would work considering the “reluctance” of tech companies to work with the Department of Defense, Schmidt suggested that firms will need to agree between themselves on acceptable norms.

“The industry is going to come to some set of agreements on AI principles — what is appropriate use, what is not — and my guess is that there will be some kind of consensus among key industry players on that,” said Schmidt.

Schmidt, who stepped down as Alphabet chairman last December but remains on the company’s board, said he was speaking in a personal capacity, and not as the representative of any company. But the “reluctance” mentioned by Stefanik presumably includes objections by Google employees. After it was revealed in March that Google had been working with the DOD on AI to analyze drone imagery as part of an initiative called Project Maven, more than 3,000 employees signed a letter protesting the firm’s involvement.

“We believe that Google should not be in the business of war,” said the letter, which was addressed to Google CEO Sundar Pichai. “We ask that Project Maven be cancelled, and that Google draft, publicize and enforce a clear policy stating that neither Google nor its contractors will ever build warfare technology.”

Google has said that the technology it is helping to develop is for “non-offensive uses only” and is intended to flag images captured by surveillance drones for human inspection. However, there is no clear line separating this use case from one that could result in fatalities: the Pentagon says Project Maven’s computer vision tech is intended for deployment in war zones like Iraq and Syria.

Schmidt told the committee that he was not involved in any decision-making on Project Maven, but his comments suggest that Google is headed for a reckoning over its involvement in military AI. The letter produced by the company’s employees said such work would tarnish its brand, compromise its moral values, and make it harder to recruit top talent. But as Schmidt makes clear, the US military has a huge motivation to work with the best minds in AI — and a good number of those currently work for Google.

“The world’s most prominent AI companies focus on gathering the data on which to train AI and the human capital to support and execute AI operations,” said Schmidt in his written testimony. “If DoD is to become ‘AI‑ready,’ it must continue down the pathway that Project Maven paved and create a foundation for similar projects to flourish... It is imperative the Department focus energy and attention on taking action now to ensure these technologies are developed by the U.S. military in an appropriate, ethical, and responsible framework.”