Yesterday, SpaceX and Tesla Motors founder Elon Musk donated $10 million to help save the world – or so he thinks.

Musk’s donation went to the Future of Life Institute (FLI), a “volunteer-run research and outreach organization working to mitigate existential risks facing humanity.” To that end, Musk’s money will be distributed to like-minded researchers around the world. But what exactly are these “existential risks” humanity is supposedly pitted against?

As the memory and processing power of computers steadily approach those of the human brain, some predict that an artificial “superintelligence” is just on the horizon. And while the prospect has the scientific community buzzing about the possibilities, some academics are hesitant. Musk and others see artificial intelligence as a dangerous new frontier – and perhaps a threat comparable to nuclear war. Crazy? Maybe not, according to a growing list of prominent scientific thinkers.

"There are seven billion of us on this little spinning ball in space. And we have so much opportunity," MIT professor and FLI founder Max Tegmark told the Atlantic. "We have all the resources in this enormous cosmos. At the same time, we have the technology to wipe ourselves out."

And the FLI isn’t just some social club for rich weirdos. Stephen Hawking and Morgan Freeman are both on the organization’s scientific advisory board, bringing brain power and star power to its support base. Skype creator Jaan Tallinn co-founded the group. The rest of the board comprises academics with pedigrees from Harvard, MIT, and Cambridge University.

Oxford University’s Nick Bostrom, who is also on the board, wrote an entire book on the subject of AI takeover: Superintelligence: Paths, Dangers, Strategies. In its preface, he writes:

“In principle, we could build a kind of superintelligence that would protect human values. We would certainly have strong reason to do so. In practice, the control problem – the problem of how to control what the superintelligence would do – looks quite difficult. It also looks like we will only get one chance. Once unfriendly superintelligence exists, it would prevent us from replacing it or changing its preferences. Our fate would be sealed.”

In the works of science-fiction writer Isaac Asimov, intelligent machines are bound by “The Three Laws of Robotics,” which forbid them to cause harm to humans. But that wouldn’t necessarily work in the real world, Bostrom writes. He suggests that superintelligences might respond to human requests with perverse instantiation – that is, they could satisfy a stated goal literally, but by unintended and harmful means. For example, a superintelligence programmed to make us happy might choose the most efficient and effective way of doing so – by implanting electrodes into the pleasure centers of our brains.

As dire as it all sounds, the FLI’s stated goal isn’t to halt the progress of artificial intelligence research. Instead, it hopes to ensure that AI systems remain “robust and beneficial” to human society.

"Building advanced AI is like launching a rocket,” Tallinn stated in a press release. “The first challenge is to maximize acceleration, but once it starts picking up speed, you also need to to focus on steering."

But if superintelligent AI really does pose a threat to mankind, how do we assess that threat? How can humans anticipate the actions of a fundamentally more intelligent machine? Of a being that became sentient not through Darwinian natural selection, but by human ingenuity?


The members of FLI don’t have the answers. They just want the scientific community to start asking the questions, Tegmark says.

"The reason we call it The Future of Life Institute and not the Existential Risk Institute is we want to emphasize the positive," Tegmark told the Atlantic. "We humans spend 99.9999 percent of our attention on short-term things, and a very small amount of our attention on the future."