Elon Musk is worried that AI will destroy humanity, and so he's decided to donate $10 million toward research into how we can keep artificial intelligence safe. Musk, the CEO of Tesla and SpaceX, has previously expressed concern that something like what happens in The Terminator could happen in real life. He's also said that AI is "potentially more dangerous than nukes." The donation is meant both to prevent such outcomes and to ensure that AI is used for good, to the benefit of humanity.

"It's best to try to prevent a negative circumstance from occurring than to wait for it to occur and then be reactive," Musk says. "This is a case where the range of negative outcomes, some of them are quite severe. It's not clear whether we'd be able to recover from some of these negative outcomes. In fact, you can construct scenarios where recovery of human civilization does not occur. When the risk is that severe, it seems like you should be proactive and not reactive."

The money will be distributed to researchers through grant competitions, with the application process beginning on Monday. Overseeing the distribution is the Future of Life Institute, which describes itself as an organization "working to mitigate existential risks facing humanity." The institute doesn't care whether researchers are in academia or with a company — it just wants the money distributed to people with what it considers good ideas. "The plan is to award the majority of the grant funds to AI researchers," the institute explains, "and the remainder to AI-related research involving other fields such as economics, law, ethics, and policy."

"Funding research on artificial intelligence safety. It's all fun & games until someone loses an I http://t.co/t1aGnrTU21" — Elon Musk (@elonmusk), January 15, 2015

Musk was among the high-profile signatories of an open letter from FLI earlier this week warning scientists that AI must not only grow more capable, but also more beneficial. This is exactly what his money is going toward. While FLI doesn't specify what it wants to see from the research, its letter did lay out a long list of guidelines and priorities for researchers. Some of these ideas will likely filter into the proposals that it ultimately accepts and funds with Musk's money.

FLI's suggested research priorities include optimizing AI's economic impact so that it avoids destroying jobs in a way that increases income inequality, determining how AI should handle ethical questions like those surrounding autonomous vehicle collisions, and guaranteeing human control over something like a weapons system.