Elon Musk didn’t make his fame and fortune by being cautious. The tech mogul behind Tesla Motors, SpaceX, and other ventures has big plans for how to transform the world — and big thoughts for how things can go wrong. Case in point: his repeated warnings that A.I. could destroy humanity.

This past Saturday brought yet another example of Musk voicing concerns about the rise of the machines, but this time he was speaking directly to America’s political leaders. At a meeting of the National Governors Association, Musk told a room full of state leaders that “A.I. is a fundamental risk to the existence of human civilization, and I don’t think people fully appreciate that.”

This was perhaps the most aggressive alarm Musk has raised about A.I. so far. Telling a bunch of techies there are problems with A.I. is one thing. Telling a roomful of politicians that intelligent machines pose an “existential threat” is, well, something much more severe.

But here’s the crucial question: Is Musk right to be worried?

Not according to much of the A.I. research community. Pedro Domingos, an A.I. researcher at the University of Washington in Seattle, summed up his reaction on Twitter with a single word: “sigh.”

François Chollet, who works on deep learning research at Google, had a longer rebuttal. He noted that A.I. was not inherently problematic, but it could exacerbate existing problems with how humans use and relate to technology. “[A.I. and machine learning] makes a few existing threats worse,” he tweeted. “Unclear that it creates any new ones.”

Chollet’s colleague, David Ha, also pushed back against Musk’s concerns.

Subbarao Kambhampati, a professor of computer science at Arizona State University, told Inverse that, like the majority of the A.I. community, he believes Musk is indeed being needlessly alarmist. “While there needs to be an open discussion about the societal impacts of A.I. technology, much of Mr. Musk’s oft-repeated concerns seem to focus on the rather far-fetched, super-intelligence take-over scenarios,” he said.

Taking Musk’s comments seriously depends on the notion that, because he heads Tesla and the brain-computer interface company Neuralink, companies that develop and deploy machine-learning technologies, he knows firsthand how real the potential of a Terminator-like future actually is.

Kambhampati strongly rejected that argument. He pointed to the Obama administration’s 2016 report on preparing for a future with artificial intelligence, which comprehensively examined the potential social impacts of A.I. and discussed ways the government could regulate development to keep it on a positive path. The report does not address “the super-intelligence worries that seem to animate Mr. Musk,” he said, precisely because they are not a valid concern.

Kambhampati is the president of the Association for the Advancement of A.I. and a trustee for the Partnership on A.I., and he said these groups, and others like them, are more concerned with the realistic and short-term impacts of artificial intelligence. “You will see a conspicuous lack of obsession with super-intelligence worries,” he said.

“I can’t argue that a small set of people shouldn’t be thinking about these worries, however far-fetched,” he continued. “But Mr. Musk’s megaphone seems to be rather unnecessarily distorting the public debate, and that is quite unfortunate.”

The governors’ reactions to Musk highlight how much influence the billionaire wields. “I think a lot of us felt like we were in the presence of Alexander Graham Bell or Thomas Alva Edison,” Colorado Governor John Hickenlooper told NPR. “It was remarkable.”

NPR notes that some of the governors were skeptical of regulating A.I., if only because the technology is still in its infancy, so it’s an open question how seriously the room took Musk’s specific concerns. But c’mon now, what could possibly go wrong with spreading alarmist ideas among America’s political leaders?