WASHINGTON — Fei-Fei Li is among the brightest stars in the burgeoning field of artificial intelligence, somehow managing to hold down two demanding jobs simultaneously: head of Stanford University’s A.I. lab and chief scientist for A.I. at Google Cloud, one of the search giant’s most promising enterprises.

Yet last September, when nervous company officials discussed how to speak publicly about Google’s first major A.I. contract with the Pentagon, Dr. Li strongly advised shunning those two potent letters.

“Avoid at ALL COSTS any mention or implication of AI,” she wrote in an email to colleagues reviewed by The New York Times. “Weaponized AI is probably one of the most sensitized topics of AI — if not THE most. This is red meat to the media to find all ways to damage Google.”

Dr. Li’s concern about the implications of military contracts for Google has proved prescient. Since the company won a share of the contract for the Maven program, which uses artificial intelligence to interpret video images and could be used to improve the targeting of drone strikes, its relationship with the Defense Department has touched off an existential crisis, according to emails and documents reviewed by The Times as well as interviews with about a dozen current and former Google employees.