Oxford philosopher Nick Bostrom (left) and DeepMind CEO Demis Hassabis (right). YouTube/Future of Life

The head of Google DeepMind is worried that technology companies and individuals will fail to co-ordinate on the development of artificial superintelligence — defined by Oxford philosopher Nick Bostrom as "an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills."

DeepMind CEO Demis Hassabis, whose company is arguably at the forefront of the race to develop human-level artificial intelligence (AI), said at the Future of Life Institute's Beneficial AI conference in January that he wants (and expects) superintelligence to be created.

But it's important that technology companies and individuals are open and transparent about their AI research, according to Hassabis.

The Cambridge graduate and chess master said that when superintelligence is close to being developed, the leader of the AI race might need to "slow down ... at the end." This would give societies a chance to adapt to superintelligence gradually, while providing scientists with the opportunity to carry out further research that could mitigate the risks of developing harmful AI.

"The [AI] control problems and other issues; they're very difficult but I think we can solve them," said Hassabis on a panel with eight other AI leaders. "The problem is the co-ordination problem of making sure there is enough time to slow down at the end."

Hassabis went on to paint a picture of one AI group slowing down their AI development efforts to let experts think about the situation for five years or so while another group raced ahead.

"What about all the other teams that are reading the papers and are not going to do that [stop and think] while you’re thinking?" said Hassabis. "This is what I worry about quite a lot because it seems like that co-ordination problem is quite difficult."

Off the back of the Beneficial AI conference, Hassabis signed a set of 23 principles for the safe development of AI, alongside Elon Musk, the billionaire cofounder behind Tesla, SpaceX, and PayPal, and cosmologist Stephen Hawking.

Last September, technology firms including Facebook, Microsoft, Amazon, and DeepMind set up a group called the Partnership on AI in a bid to ensure that self-thinking machines are developed safely and ethically. Apple, which has traditionally been very secretive about all aspects of its research, including AI, announced it had become a member last month.

Hassabis suggested that members of the Partnership on AI should agree on a set of protocols or procedures for the development of AI.

But, in years to come, it might not just be the big technology companies that are developing advanced forms of AI. Hassabis warned that if hardware continues advancing at its current rate, there could come a point where someone in their garage could create a superintelligence. "In say 50/100 years time, someone, a kid in their garage, could create a seed AI," said Hassabis. "I think that would be a lot more difficult to co-ordinate."

Hassabis did not specify when he thinks superintelligence will be developed, but he did say it's likely to happen within a matter of years of machines achieving human levels of intelligence — something that AI author Ray Kurzweil expects to happen by 2029. If you combine those two predictions, then superintelligence could be developed by 2040 or earlier.

Musk thinks machines will become as smart as the "sum of humanity" just days after they become as smart as the most intelligent human on the planet.

This transition period, where machines start to surpass human-level intelligence and become superintelligent, is likely to be one of the most defining points in the history of humanity. It'll be a tense moment. The human brain has only so much bandwidth, but with a surplus of hardware at their disposal, machines can go on becoming ever more intelligent.