When Elon Musk, the man behind Tesla and SpaceX, comments on the future, ears in the tech world perk up. But a weekend mini-rant from the entrepreneur drew the attention of even some non-techies and revealed that he is more worried about an artificial intelligence (A.I.) apocalypse than he has let on in recent months.

Posting his thoughts to Twitter on Saturday, after recommending a book about A.I., Musk made what might be the most controversial technology statement of his career: "We need to be super careful with A.I. Potentially more dangerous than nukes."

Worth reading Superintelligence by Bostrom. We need to be super careful with AI. Potentially more dangerous than nukes. — Elon Musk (@elonmusk) August 3, 2014

Others, like Google's Ray Kurzweil, have discussed a technological "singularity," in which A.I.s surpass and take over from humans, but rarely has such a high-profile voice with real ties to the technology business put the prospect in such stark terms.

To be fair, Musk's thoughts should be considered in the context in which he made them: a recommendation of Superintelligence: Paths, Dangers, Strategies, a book by Nick Bostrom that asks major questions about how humanity will cope with super-intelligent computers in the future.

Nevertheless, the comparison of A.I. to nuclear weapons, a threat that has cast a worrying shadow over humanity's long-term survival for decades, immediately raises a couple of questions.

The first, and the most likely from many quarters, will be to question Musk's future-casting. Some may use his A.I. concerns, which remain fantastical to many, as proof that his predictions about electric cars and commercial space travel are the visions of someone who has seen too many science fiction films. "If Musk really thinks robots might destroy humanity, maybe we need to dismiss his long-view thoughts on other technologies." Those essays are likely already being written.

The other, and perhaps more troubling, is to consider that Musk's comparison of A.I. to nukes is apt. What if Musk, empowered by rare insight from his exclusive perch guiding the very real future of space travel and automobiles, really has an accurate line on the future of A.I.?

Later, doubling down on his initial tweet, Musk wrote, "Hope we're not just the biological boot loader for digital superintelligence. Unfortunately, that is increasingly probable."

Hope we're not just the biological boot loader for digital superintelligence. Unfortunately, that is increasingly probable — Elon Musk (@elonmusk) August 3, 2014

In recent years, Musk's most science fiction-inspired comments have revolved around colonizing Mars, but this latest comment, and the one he made back in June about fearing a "Terminator" future, indicate that this is a serious issue for the tech mogul. As for whether his concerns hold any weight, we can't be sure, just yet, but Musk is hedging his bets by investing in an artificial intelligence research company called Vicarious.

Others in the tech space, though less vocal about it, apparently share Musk's investment approach toward super-intelligent machines: backers of Vicarious include Facebook's Mark Zuckerberg and Amazon's Jeff Bezos.