However you look at it, the future appears bleak. The world is under immense stress environmentally, economically and politically. It’s hard to know what to fear the most. Even our own existence is no longer certain. Threats loom from many possible directions: a giant asteroid strike, global warming, a new plague, or nanomachines going rogue and turning everything into grey goo.

Another threat is artificial intelligence. In December 2014, Stephen Hawking told the BBC that “the development of full artificial intelligence could spell the end of the human race… It would take off on its own, and redesign itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.” Last year, he followed that up by saying that AI is likely “either the best or worst thing ever to happen to humanity”.

Other prominent people, including Elon Musk, Bill Gates and Steve Wozniak, have made similar predictions about the risk AI poses to humanity. Nevertheless, billions of dollars continue to be funnelled into AI research, and stunning advances are being made. In a landmark match in March 2016, the Go master Lee Sedol lost 4-1 to the AlphaGo computer. In many other areas, from driving taxis on the ground to winning dogfights in the air, computers are starting to take over from humans.

Hawking’s fears revolve around the idea of the technological singularity: the point in time at which machine intelligence starts to take off, and a new, more intelligent species begins to inhabit Earth. We can trace the idea of the technological singularity back to a number of different thinkers, including John von …