There's plenty of debate over the singularity, a hypothetical future moment when software becomes self-aware and intelligent beyond our capacity to understand. Some say it will be a boon for humanity; others foresee an artificial intelligence-driven apocalypse.

We already knew that Elon Musk was in the latter camp. Now we know that the SpaceX and Tesla entrepreneur thinks A.I. doom is approaching faster than anyone suspects: within the next five to 10 years.

It all started last Friday, when noted virtual reality pioneer Jaron Lanier was featured on publisher John Brockman's site, Edge.org, discussing the potential threat of artificial intelligence in a post titled "The Myth of A.I." Following his thoughts are comments from a number of science and technology luminaries weighing in on the topic. Among those comments were the thoughts of Musk, who sounded a particularly alarming note about the threat of A.I.

"The risk of something seriously dangerous happening is in the five year timeframe. 10 years at most," wrote Musk. "Please note that I am normally super pro technology and have never raised this issue until recent months. This is not a case of crying wolf about something I don't understand."

The problem: according to a Musk spokesperson contacted by Mashable, the comments were emailed to Brockman and were not intended to be made public on the site.

Soon after the comment appeared, it was removed, but not before a screenshot was captured and posted on Reddit, effectively ensuring that Musk's supposedly private comments were preserved for all the Internet to see.

Aside from the frighteningly near-term prediction, the other thing that seemed to lend Musk's comment weight was the mention of DeepMind, a very real artificial intelligence company he has invested in.

"The pace of progress in artificial intelligence (I'm not referring to narrow AI) is incredibly fast," said Musk. "Unless you have direct exposure to groups like Deepmind, you have no idea how fast — it is growing at a pace close to exponential."

Musk went on to claim that his view is shared by others working in the space.

"I am not alone in thinking we should be worried," said Musk. "The leading AI companies have taken great steps to ensure safety. They recognize the danger, but believe that they can shape and control the digital superintelligences and prevent bad ones from escaping into the Internet. That remains to be seen ..."

We've covered Musk's recent comments about the potential dangers of A.I. before, when he variously compared the threat to nuclear weapons and a "demon" summoned by humanity. But none of those comments indicated that Musk believed our downfall at the hands of A.I. would come so soon.

To see what some other top minds think about the topic, it's worth reading the full list of comments, which still appears on the site and includes the likes of Peter Diamandis (X Prize Foundation), George Church (Harvard professor and director of the Personal Genome Project) and Nathan Myhrvold (former chief technology officer at Microsoft).

Still, here's what puts Musk's comments in a special light: he's one of the few tech visionaries not just talking about humanity's future, but actually executing very difficult tasks that could help define it. His prediction is way ahead of the pack, too. Ray Kurzweil, chief proponent of the singularity theory, predicts that we'll see the technological event come to pass sometime around 2045 — decades later than Musk's prediction.

Whether you believe Musk's perspective is rooted in reality or in too many science fiction novels, the very fact that the topic is now being discussed so seriously in the science and technology communities is a telling turn of events.

In an email to Mashable, Musk's spokesperson said the entrepreneur intends to publish a longer post on the topic of artificial intelligence soon.