WASHINGTON: Google’s withdrawal from the military’s Project Maven was a one-time “reset” that will not hinder growing cooperation on a wide range of other projects, Kent Walker, the company’s senior vice president for global affairs, said today. “We’re eager to do more,” Walker said.

In what turned into an unlikely lovefest this morning, Kent Walker, Google’s current senior vice president for global affairs; Eric Schmidt, formerly Google’s chairman and now the chairman of the Pentagon-appointed Defense Innovation Board; and Lt. Gen. John “Jack” Shanahan, the former head of Project Maven now leading the Pentagon’s year-old Joint AI Center, sat side by side.

They were on stage at an artificial intelligence conference hosted by the congressionally chartered National Security Commission on Artificial Intelligence.

“We did a very successful collaboration with the Google team on this [i.e. Maven],” Shanahan said with no hint of bitterness. “We got all the way to the end of the contract” – Google’s mid-2018 announcement was a pledge not to renew the contract, which it kept working on until it wrapped up early this year – “and we got products that we were very pleased with.”

“What was happening internal to the company, how that played out, is a little bit of a different story,” Shanahan acknowledged. “Even some of the software engineers on that project, they got to the point they almost felt a little bit ostracized, because others criticized them for working with the Department of Defense. But day to day… we got tremendous support from Google.”

“My objective in this panel [is] to put to bed this notion that somehow Silicon Valley wouldn’t work with the military,” Schmidt said. “We’ve clearly seen examples — small companies, large companies.”

“It’s been frustrating to hear concerns around our commitment to national security and defense, and so I wanted to set the record straight,” said Walker, himself the son of a career military man. On Maven, he said, “we decided to press the reset button until we had the opportunity to develop our own set of AI principles [and] internal standards and review processes. But that was a decision focused on a discrete contract, not a broader statement about our willingness or our history of working with the Department of Defense.”

Today, Google is working with Shanahan’s JAIC on projects “from cybersecurity to healthcare to business automation,” Walker said. It’s working with DARPA on “fundamental projects” to make AI less brittle, combat deep fakes, and keep computer hardware improving despite obstacles to Moore’s Law. It’s “priming the pump” for further work on modeling & simulation, training, and recruiting. And it’s “actively” pursuing additional national security certifications.

“We are looking forward to working more closely together in the future,” Walker said.

True, Google has decided not to work on weapons systems, Walker acknowledged: “We recognize the limits of our experience in that area.” But that’s just one aspect of a general caution about potential unintended consequences of technology, he said. Google carefully reviewed a project to teach computers to read lips, for example, until it was confident the technology could not be applied to long-range surveillance, only to aiding the deaf in one-on-one conversations. It’s working on safeguards for facial-recognition systems. And, he emphasized, unlike many competitors, it has limited its operations in China to advertising and open-source research, neither a particularly tempting target for espionage.

(That said, Google has also been bitterly criticized — by human rights activists and its own employees — for developing censored search algorithms for Chinese users. Former Deputy Defense Secretary Bob Work has blasted the company for conducting AI research in China that he said would indirectly benefit the People’s Liberation Army.)

“We are a proud American company,” Walker said, in words that must have been music to the ears of Pentagon officials. “We are committed to the cause of national defense for the United States of America, for our allies, and for peace and safety and security in the world.”

That said, “we approach that task thoughtfully, as we do with… a variety of advanced technologies,” he went on. “We want to be thoughtful and have clear frameworks and transparency and understanding as we move forward.”

The five AI principles published last week by Schmidt’s Defense Innovation Board are a good starting point for ethical collaboration between industry and military, Walker and Shanahan agreed. So too, Shanahan added, is the National Security Commission’s own interim report released yesterday, which he said calls for “a shared sense of responsibility about our AI future, a shared vision about the importance of trust and transparency.”

Part of the problem that affected Project Maven, Shanahan said, was a lack of transparency – though he added that the government was willing to talk, and it was Google’s initial reluctance to discuss its role in public that led to public misunderstanding. “It is not a weapons project,” he emphasized, but an effort to analyze huge amounts of surveillance video collected by unarmed drones (although, of course, that analysis could detect targets for other forces to strike).

“We lost the narrative very quickly,” Shanahan said. “I view what happened with Google and Maven as a little bit of a canary in a coal mine… It would have happened to somebody else at some point, but this idea of transparency and a willingness to talk about what each side is trying to achieve may be the biggest lessons of all.”

“The fact that it happened when it did, as opposed to on the verge of a conflict or a crisis” — [well, other than the generation-long war on terrorism – the editors] means “we’ve gotten some of that out of the way,” Shanahan said.

Government, industry, and academia need to work to restore the close cooperation that prevailed from the 1950s onward, through Silicon Valley’s rise from a cluster of government contractors into a global giant, but that degenerated as the private sector raced ahead of the military amid post-9/11 concerns about privacy, surveillance, and collateral damage.

“Snowden, Apple encryption, mismatched operating tempo and agility, different business models, general mistrust between the government and industry — we started talking past each other instead of with each other,” Shanahan said. “Industry is moving so much faster than the Department of Defense, [so] we’re playing perpetual catch up — and some employees in the tech industry see no compelling reason to work with the Department of Defense, and even those who want to work with DoD — which I’d say is far more than sometimes is portrayed… we don’t make it easy for them.”

All that said, Shanahan argued, the US government still does a far better job of having an open discussion about ethics, technology, and the military than do its great-power rivals. Consider the Pentagon-appointed Defense Innovation Board’s lengthy research and public hearings before publishing its recommendations on AI ethics, he said:

“China and Russia did not embark on a 15-month process involving public hearings and discussion about the ethical, safe, and lawful use of artificial intelligence,” Shanahan said. “They’re not doing it, and I don’t expect they ever will.”