Google DeepMind CEO Demis Hassabis. YouTube/Royal Television Society

Artificial intelligence (AI) is being developed at a staggering pace by companies like DeepMind, and a number of high-profile scientists have raised concerns about the future of the technology.

But scientists aren't the only ones who fear the worst, according to a long read on stopping the AI apocalypse published in Vanity Fair on Sunday.

One of DeepMind's own investors allegedly joked that they should have shot AI guru Demis Hassabis in order to save the human race. It's important to stress that this was a *joke* and the investor obviously didn't mean it. But the joke highlights an interesting point. Some people are genuinely concerned that machines could end up outsmarting humans and doing away with them altogether.

Together with an army of neuroscientists and computer programmers, Hassabis is looking to create forms of superintelligence that can learn and think for themselves.

Some, including Elon Musk and Stephen Hawking, believe that one day these superintelligences could pose a threat to humanity if they decide that humans are no longer necessary. This sci-fi scenario could, however, be avoided if tech companies and governments take the right steps when developing AI. It's also worth noting that superintelligences could find cures for cancer and reduce the world's energy consumption. No one really knows.

Still, one unnamed investor is alleged to have joked that they should have "shot Hassabis on the spot" after a meeting with him. At least, that's what Peter Thiel reportedly told Vanity Fair's Maureen Dowd.

Thiel "told me a story about an investor in DeepMind who joked as he left a meeting that he ought to shoot Hassabis on the spot, because it was the last chance to save the human race," Dowd wrote in her piece.

Hassabis is confident that scientists will develop superintelligences at some point but he's less clear on the time frame that this will happen in. It could be within the next few decades or it could take more than 100 years. He's also made it clear that he wants it to happen.

Shane Legg, who cofounded DeepMind (now owned by Google parent company Alphabet), has also admitted he has concerns about advanced forms of technology. He said in an interview in 2014: "I think human extinction will probably occur, and technology will likely play a part in this."

Last October, Oxford philosopher Nick Bostrom said that DeepMind is winning the race to develop human-level AI. The company, which employs approximately 400 people in King's Cross, is perhaps best known for developing an AI agent that defeated the world champion of the ancient Chinese board game Go. However, it's also applying its AI to other areas, including healthcare and energy management.

Once human-level AI is developed, many in the field believe that machines will quickly go on to develop forms of superintelligence.

"I think it partly depends on the architecture that end[s] up delivering human-level AI," Hassabis said earlier this year. "So the kind of neuroscience inspired AI that we seem to be building at the moment, that needs to be trained and have experience and other things to gain knowledge. It may be in the order of a few years, possibly even a decade."

DeepMind did not immediately respond to Business Insider's request for comment.