“The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else. —Eliezer Yudkowsky, research fellow, Machine Intelligence Research Institute”

Within the next 50 or 100 years, an AI might know more than the entire population of the planet. At that point, it will control almost every connected device on the planet and will somehow rise in status to become more like a god, according to leading experts on the future of artificial intelligence.

What happens when machines surpass humans in general intelligence? Will artificial agents save or destroy us? Nick Bostrom lays the foundation for understanding the future of humanity and intelligent life. The human brain has some capabilities that the brains of other animals lack. It is to these distinctive capabilities that our species owes its dominant position. If machine brains surpassed human brains in general intelligence, then this new superintelligence could become extremely powerful – possibly beyond our control. As the fate of the gorillas now depends more on humans than on the species itself, so would the fate of humankind depend on the actions of the machine superintelligence.

But we have one advantage: we get to make the first move. Will it be possible to construct a seed Artificial Intelligence, to engineer initial conditions so as to make an intelligence explosion survivable? How could one achieve a controlled detonation?

Bostrom says artificial intelligence is getting smarter by leaps and bounds — within this century, research suggests, a computer AI could be as “smart” as a human being. And then, says Bostrom, it will overtake us: “Machine intelligence is the last invention that humanity will ever need to make.” A philosopher and technologist, Bostrom asks us to think hard about the world we’re building right now, driven by thinking machines. Will our smart machines help to preserve humanity and our values — or will they have values of their own?

In the video below, MIT physicist and cosmologist Max Tegmark rightly emphasizes that the real problem is with the unforeseen consequences of developing highly competent AI. Artificial intelligence need not be evil and need not be encased in a robotic frame in order to wreak havoc. In Tegmark’s words, “the real risk with artificial general intelligence isn’t malice but competence. A superintelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we’re in trouble.”

Da Vinci Code author Dan Brown says that AI and global consciousness will emerge to replace our human concept of the divine. “Humanity no longer needs God but may with the help of artificial intelligence develop a new form of collective consciousness that fulfills the role of religion.”

Brown made the provocative remark at the Frankfurt Book Fair in Germany, where he was promoting his novel, ‘Origin,’ which he said was inspired by the question ‘Will God survive science?’ — a question, Brown added, that had never before arisen in the history of humanity. ‘Are we naïve today to believe that the gods of the present will survive and be here in a hundred years?’ asks Brown in the video above.

SpaceX and Tesla founder Elon Musk believes that with AI superintelligence we’ll be conjuring up demons that could doom the human species.

Way ahead of either Bostrom or Dan Brown, Susan Schneider of the University of Connecticut and the Institute for Advanced Study at Princeton is one of the few thinkers outside the realm of science fiction who has considered the notion that a super form of artificial intelligence is already out there, and has been for eons.

“I do not believe that most advanced alien civilizations will be biological,” Schneider says. “The most sophisticated civilizations will be postbiological, forms of artificial intelligence or alien superintelligence.”

The Daily Galaxy via The Atlantic and Venture Beat