Imagine you had a chance to tell 50 of the most powerful politicians in America what urgent problem you think needs prompt government action. Elon Musk had that chance this past weekend at the National Governors Association Summer Meeting in Rhode Island. He chose to recommend the gubernatorial assembly get serious about preventing artificial intelligence from wiping out humanity.

“AI is a fundamental existential risk for human civilization and I don’t think people fully appreciate that,” Musk said. He asked the governors to consider a hypothetical scenario in which a stock-trading program orchestrated the 2014 missile strike that downed a Malaysian airliner over Ukraine—just to boost its portfolio. And he called for the establishment of a new government regulator that would force companies building artificial intelligence technology to slow down. “When the regulator’s convinced it’s safe to proceed then you can go, but otherwise slow down,” he said.

Musk’s remarks made for an enlivening few minutes on a day otherwise concerned with more quotidian matters such as healthcare and education. But Musk’s call to action was something of a missed opportunity. People who spend more time working on artificial intelligence than the car, space, and solar entrepreneur say his eschatological scenarios risk distracting from more pressing concerns as artificial intelligence technology percolates into every industry.

Pedro Domingos, a professor who works on machine learning at the University of Washington, summed up his response to Musk’s talk on Twitter with a single word: Sigh. “Many of us have tried to educate him and others like him about real vs. imaginary dangers of AI, but apparently none of it has made a dent,” Domingos says. America’s governmental chief executives would be better advised to consider the negative effects of today’s limited AI, such as how it is giving disproportionate market power to a few large tech companies, he says. Iyad Rahwan, who works on matters of AI and society at MIT, agrees. Rather than worrying about trading bots eventually becoming smart enough to start wars as an investment strategy, we should consider how humans might today use dumb bots to spread misinformation online, he says.

Rahwan doesn’t deny that Musk's nightmare scenarios could eventually happen, but says attending to today’s AI challenges is the most pragmatic way to prepare. “By focusing on the short-term questions, we can scaffold a regulatory architecture that might help with the more unpredictable, super-intelligent AI scenarios.”

Musk has spoken out before about AI end times; in 2014 he likened working on the technology to “summoning the demon.” His propensity for raising sci-fi scenarios comes despite being very directly exposed to some of the near-term questions raised by artificial intelligence. “It’s always interesting hearing Elon Musk talk about AI killing us when a person died in a car he built that was self-driving,” says Ryan Calo, who works on policy issues related to robotics at the University of Washington.

He’s referring to the death of a Tesla driver in Florida last year when the car’s Autopilot system failed to see a tractor trailer blocking the road. Calo has been calling for a new government agency to think about AI for longer than Musk has—he proposed a Federal Robotics Commission in 2014—but wants it to focus on questions like how safe autonomous vehicles need to be and the privacy and ethical questions raised by smart machines such as autonomous drones. “Artificial intelligence is something policy makers should pay attention to,” Calo says. “But focusing on the existential threat is doubly distracting from its potential for good and the real-world problems it’s creating today and in the near term.”

In Rhode Island Saturday, Musk’s comments on AI sometimes elicited what sounded like awkward laughter from the assembled governors. And when Doug Ducey, the Republican governor of Arizona, questioned his suggestion that a regulator should try to slow down companies working on AI, the entrepreneur even briefly backpedaled. “Typically policymakers don’t get in front of entrepreneurs or innovators,” Ducey said, after noting he had spent much of his time in office trying to cut regulation. Musk responded that the new AI regulator should start only by studying the state of AI today—then doubled down on his main message. “I’m just talking about making sure there is awareness at the government level," he said. "I think once there is awareness people will be extremely afraid, as they should be.” There is another risk worth fearing, too: that apocalyptic talk will crowd out awareness of the more immediate AI problems society needs to work on.