I was contacted by a staff writer from the online newsmagazine The Daily Dot. He is writing a story at the intersection of computer superintelligence and religion, and asked me a few questions. Here are my answers to his queries.

Dear Dylan:

I see you’re on a tight deadline, so I’ll just answer your questions off the top of my head. A disclaimer, though: all of these questions really demand a dissertation-length response. My answers are below:

1) Is there any religious suggestion (Biblical or otherwise) that humanity will face something like the Singularity?

There is no specific religious suggestion that we’ll face a technological singularity. In fact, ancient scriptures from various religions say virtually nothing about science and technology, and what they do say about them is usually wrong (the earth doesn’t move, is at the center of the solar system, is 6,000 years old, etc.).

Still, people interpret their religious scriptures, revelations, and beliefs in all sorts of ways. So a fundamentalist might say that the singularity is the end of the world as foretold by the Book of Revelation, or something like that. There are also a Christian Transhumanist Association and a Mormon Transhumanist Association, and some religious thinkers are scurrying to claim the singularity for their very own. But a prediction of a technological singularity—absolutely not. The simple fact is that the authors of ancient scriptures in all religious traditions obviously knew nothing of modern science. Thus they couldn’t predict anything like a technological singularity.

2) How realistic do you personally think the arrival of some sort of superintelligence (SI) is? How “alive” would it seem to you?

The arrival of SI is virtually inevitable, assuming we avoid all sorts of extinction scenarios—killer asteroids, out-of-control viruses, nuclear war, deadly climate change, a new Dark Ages that puts an end to science, etc. Once you adopt an evolutionary point of view and recognize the exponential growth of culture, especially of science and technology, it is easy to see that we will create intelligences much smarter than ourselves. So if we survive and science advances, then superintelligence (SI) is on the way. And that is why some very smart people like Bill Gates, Stephen Hawking, Nick Bostrom, Ray Kurzweil, and others are talking about SI.

I’m not exactly sure what you mean by your “How alive would it seem to you” question, but I think you’re assuming we would be different from these SIs. Instead there is a good chance we’ll become them through neural implants, or by some uploading scenario. This raises the question of what it’s like to be superintelligent, or, in your words, how alive you would feel as one. Of course I don’t know the answer, since I’m not superintelligent! But I’d guess you would feel more alive if you were more intelligent. I think dogs feel more alive than rocks, humans more alive than dogs, and I think SIs would feel more alive than us because they would have greater intelligence and consciousness.

If the SIs are different from us—imagine, say, a super-smart computer or robot—our assessment of how alive they were would depend on: 1) how receptive we were to attributing consciousness to such beings; and 2) how alive they actually seemed to be. Your laptop doesn’t seem too alive to you, but Honda’s Asimo seems more alive, HAL from 2001 or Mr. Data from Star Trek seem even more alive, and a super SI—like most people’s god is supposed to be—would seem really alive.

But again, I think we’ll merge with machine consciousness. In other words, SIs will replace us or we’ll become them, depending on how you look at it.

3) Assuming we can communicate with such a superintelligence in our own natural human language, what might be the thinking that goes into preaching to and “saving” it?

Thinkers disagree about this. Zoltan Istvan thinks that we will inevitably try to control SIs and teach them our ways, which may include teaching them about our gods. Christopher J. Benek, co-founder and Chair of the Christian Transhumanist Association, thinks that AI, by possibly eradicating poverty, war, and disease, might lead humans to becoming more holy. But other Christian thinkers believe AIs are machines without souls, and cannot be saved.

Of course, like most philosophers, I don’t believe in souls, and the only way for there to be a good future is if we save ourselves. No gods will save us because there are no gods—unless we become gods.

4) Are you aware of any “laws” or understandings of computer science that would make it impossible for software to hold religious beliefs?

No. I assume you can program an SI to “believe” almost anything. (And you can try to program humans to believe things too.) I suppose you could also write programs without religious beliefs. But I am a philosopher, and I don’t know much about what computer scientists call “machine learning.” You would have to ask one of them on this one.

5) How might a religious superintelligence operate? Would it be benign?

It depends on what you mean by “religious.” I can’t imagine an SI will be impressed by the ancient fables or superstitions of provincial people from long ago. So I can’t imagine an SI will find its answers in Jesus or Mohammed. But if by religious you mean loving your neighbor, having compassion, being moral, or searching for the meaning of life, I can imagine SIs that are religious in this sense. Perhaps their greater levels of consciousness will lead them to be more loving, moral, and compassionate. Perhaps such beings will search for meaning—I can imagine our intelligent descendants doing this. In this sense you might say they are religious.

But again, they won’t be religious if you mean they think Jesus died for their sins, or that an angel led Joseph Smith to uncover and translate gold plates, or that Mohammed flew into heaven in a chariot. SIs would be too smart to accept such things.

As for “benign,” I suppose this would depend on its programming. For example, Eliezer Yudkowsky has written a book-length guide to creating “friendly AI.” (As a non-specialist, I am in no position to judge the feasibility of such a project.) Or perhaps something like Asimov’s three laws of robotics would be enough. This might also depend on whether morality follows from super-rationality. In other words, would SIs conclude that it is rational to be moral? Most moral philosophers think morality is rational in some sense. Let’s hope that as SIs become more intelligent, they’ll also become more moral. Or, if we merge with our technology, let’s hope that we become more moral.

And that is what the future survival and flourishing of our descendants depends on: we must become more intelligent and more moral. Traditional religion will not save us, and it will disappear in its current form, like so much else, after SIs arrive. In the end, only we can save ourselves.