Over the last week or so the tech media has been energetically propagating a quote by Silicon Valley hero Elon Musk, in which he equates AI development with summoning demons:

“With artificial intelligence we are summoning the demon. In all those stories where there’s the guy with the pentagram and the holy water, it’s like yeah, he’s sure he can control the demon. Didn’t work out.”

I have the utmost admiration for Elon’s work with SpaceX, Tesla, and his various other enterprises. However, as someone who has devoted decades of my life to working toward the creation of beneficial, massively superhumanly intelligent AI systems, I obviously wasn’t made very happy by his anti-AI tirade.

My initial reaction to Elon’s widely publicized demonization of AI was to write an article titled “What Do Elon Musk and the Taliban Have in Common?”, which used to exist at the URL where you now find the article you’re currently reading. The basic theme of that article was how Elon Musk’s literal demonization of AI reminded me of the way traditionalist religious fundamentalists attribute technologies they fear to Satan (e.g. the US, viewed as the key source of modern Western technology and culture, is commonly referred to as the Great Satan). I’ve now removed that article from this link, but it’s obviously still part of the public record (accessible via various Internet archives), and you can find it here in PDF form if you wish.

I got a lot of complaints about that article, including from some people I respect — which is why I replaced it with this one instead. I have to admit, that piece definitely wasn’t my highest journalistic moment. I hereby apologize to Elon Musk for writing an unnecessarily inflammatory article while I was ticked off in the heat of the moment. I wrote that article in a “tit for tat” mood, feeling that Elon’s equation of the life’s work of myself and so many other AGI researchers to the evocation of Satanic forces was unnecessary, weirdly inaccurate, and possibly dangerous. But “tit for tat”, while often a reasonable heuristic, is not always the best guide for action.

Anyway, though, while the rhetoric I used in that “Taliban” article was overblown, the core message I attempted to convey there is something I definitely stand by. Working toward superhuman AGI is not, in fact, much like summoning demons. AGI researchers are by and large highly rational people, working on advanced technology with beneficial, not selfish goals in mind.

Every new advance has both rewards and risks associated with it, and AGI is no exception. But a demon is by definition evil at its core. In the mythology Elon Musk’s quote refers to, a demon uses its evil trickery to dupe people into summoning it to help with their selfish problems. Then, in the end, the demon generally uses its evil cleverness to destroy the foolish people who invoked it, and carry out additional harm along the way.

AGI, on the other hand, is NOT by definition nor intrinsic nature evil.

AGI is no more intrinsically evil than previous huge advances like language, tools, civilization, mathematics or science have been intrinsically evil. Each of these huge advances had its risks and costs along with its benefits, but each opened amazing new doors relative to what had come before. These are the analogies we should be using when thinking about AGI — not demons or other mythical evil beings. Civilization largely “destroyed” the way of life that came before it, and AGI may end up largely “destroying” current human ways of doing things — but just as few modern humans want to go back to caveman-type living, very likely few post-Singularity humans or transhumans will want to go back to pre-AGI modes of living.

One thing I don’t talk about much is the death threats I’ve received, as an AGI researcher who is public about seeking to create superhuman intelligence and help launch a positive Singularity. I’ve received dozens over the years, including some from people associated with well-known futurist organizations that take an Elon-Musk-esque, “probably evil” stance toward AGI. I have been told in clear terms — by seemingly serious people in attendance at an AGI conference I organized some years ago — that if I ever seemed to be getting too close to really creating an AGI, then mafia types connected with certain famous Silicon Valley tech figures (no, not Elon Musk) would simply get rid of me (because, after all, on a utilitarian basis, the cost of losing one AI geek’s life means virtually nothing compared to the benefit of averting a scenario where evil AGIs take over the world and eliminate humans).

I’m sure Elon Musk has had absolutely nothing to do with nutcases making death threats against AGI researchers. However, having a Silicon Valley hero equate building AGI to summoning demonic forces feels to me non-trivially likely to inflame such nutcases into more aggressive action. That is the main thing that irritates me about seeing Elon’s quote in the media everywhere.

I have spent the last several decades of my life working on the difficult but fantastically important problem of creating Artificial General Intelligence with capability at the human level and beyond — and I’m not going to stop because assorted crazies threaten me with death over it, nor because Elon Musk equates my work with demonic invocation! If someone does end up offing me because of my AGI work, someone else besides me will continue it. The number of AGI enthusiasts and hackers on the planet is definitely increasing exponentially.

The OpenCog source code my team works on is open and now exists on a large number of people’s computers all around the world. But even if every instance of it were deleted, a few years later someone else would come along with a new codebase pushing in the same direction — or a new approach with even more promise. It is very unlikely anybody is going to stop the emergence of superhuman AGI, though of course it’s possible to slow things down for a few years by creating enough trouble for researchers.

The point of Elon’s comparison, obviously, was to highlight in a dramatic way the risks of AGI R&D. These risks are real and worthy of discussion, but equating AGI to demonic forces is really not a useful way to further such discussion. Some of my own views on how best to handle these potential risks are in the article Nine Ways to Bias Open-Source AGI Toward Friendliness, written by myself and Joel Pitt a few years back. Also see this dialogue where I debate AGI risk and related topics with Luke Muehlhauser, the Executive Director of MIRI (formerly the Singularity Institute for Artificial Intelligence), an organization focused on the risks of AGI and advocating a broadly Elon-Musk-esque view that progress on practical AGI work should be slowed or halted because of the risks. (However, I hasten to add that MIRI does not have the habit of using Biblical rhetoric to promote their ideas!)

Alongside the near-inevitability of radical AGI advance this century, another thing I’m pretty sure of is that this sort of controversy is going to continue — and is going to heat up massively once AGI gets palpably closer. Once there are proto-AGI demos on YouTube and on the TV news, showing AGI systems looking more and more obviously on the verge of crossing the line to human level intelligence, then the anti-AGI forces of all sorts are going to become a lot more vocal. At that stage, life may become a lot more dangerous and troublesome for AGI researchers — but also more exciting, due to the feeling of being on the verge of the most amazing breakthrough in human history … indeed, the first breakthrough in human history to go beyond “human” history as narrowly conceived and open things up to a broader, vastly richer and more interesting future.