A co-founder of the AI research company behind the pioneering AlphaGo system spells out the technology's impact on society and where its development is headed.


The 2016 victory by a Google-built AI at the notoriously complex game of Go was a bold demonstration of the power of modern machine learning.

That triumphant AlphaGo system, created by AI research group Google DeepMind, confounded expectations that computers were years away from beating a human champion.

But as significant as that achievement was, DeepMind's co-founder Demis Hassabis expects it will be dwarfed by how AI will transform society in the years to come.

Hassabis spelt out his vision for the future of AI at the Economist Innovation Summit in London.

AI will save us from ourselves

"I would actually be very pessimistic about the world if something like AI wasn't coming down the road," he said.

"The reason I say that is that if you look at the challenges that confront society: climate change, sustainability, mass inequality -- which is getting worse -- diseases, and healthcare, we're not making progress anywhere near fast enough in any of these areas.

"Either we need an exponential improvement in human behavior -- less selfishness, less short-termism, more collaboration, more generosity -- or we need an exponential improvement in technology.

"If you look at current geopolitics, I don't think we're going to be getting an exponential improvement in human behavior any time soon.

"That's why we need a quantum leap in technology like AI."

AI will lead to Nobel Prize-winning scientific breakthroughs

Hassabis' confidence that AI can offset the worst effects of human greed and selfishness stems from how readily the technology can be applied to solving intractable problems, such as preventing catastrophic climate change.


"I think about AI as a very powerful tool. What I'm most excited about is applying those tools to science and accelerating breakthroughs," he said.

Today's machine-learning and related AI technologies make it possible to carry out tasks such as image recognition and to find patterns in vast amounts of data, he said.

But he's particularly enthused about the potential applications of AI's ability to optimize tasks that would otherwise be overwhelmingly complex, as demonstrated by AlphaGo's success at a game where there are more potential moves than there are atoms in the universe.

"You can think about huge combinatorial spaces and you're trying to find a path through. Obviously, games like chess and Go are like that, there's such a huge number of possibilities you can't brute force the right solution.

"There are lots of areas in science that have a similar structure. I think about areas like material and drug design, where often what you're doing is painstakingly putting together all sorts of combinations of compounds and testing them for their properties."
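To give a rough sense of the scale Hassabis is describing, the sketch below estimates the size of the game trees for chess and Go and compares them with the commonly cited figure of roughly 10^80 atoms in the observable universe. The branching factors and game lengths are assumed ballpark figures for illustration, not exact values.

```python
import math

# Assumed, illustrative figures: (typical legal moves per turn, typical game length in moves).
# Real values vary by position and by source.
games = {
    "chess": (35, 80),
    "Go": (250, 150),
}

ATOMS_LOG10 = 80  # commonly cited order of magnitude for atoms in the observable universe

for name, (branching, depth) in games.items():
    # Number of possible game sequences is roughly branching**depth;
    # work in log10 to avoid astronomically large integers.
    log10_sequences = depth * math.log10(branching)
    comparison = "more" if log10_sequences > ATOMS_LOG10 else "fewer"
    print(f"{name}: ~10^{log10_sequences:.0f} game sequences, "
          f"{comparison} than the ~10^{ATOMS_LOG10} atoms in the universe")
```

Even with these conservative assumptions, chess comes out near 10^124 and Go near 10^360, which is why brute-force search is hopeless and learned heuristics of the kind AlphaGo uses are needed to find a path through the space.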

The impact of breakthroughs in areas like material design could be profound, according to Hassabis.

"It's hypothesized, for example, there could be a room-temperature superconductor that could revolutionize power and energy, but we don't know what that compound is currently.

"This is what I'm really excited about and I think what we're going to see over the next 10 years is some really huge, what I would call Nobel Prize-winning breakthroughs in some of these areas."

For its part, DeepMind is looking at how machine learning and other AI-related technologies can be applied to areas such as protein folding and quantum chemistry, he said.

Hassabis also acknowledged that these systems had the potential to be used to cause harm, and raised the possibility that at some stage, in "five to 10 years' time", there could be an argument for keeping some research out of the public domain to prevent it being exploited by "bad actors".

Deep learning is not enough to crack general AI

Creating a machine with a general intelligence similar to our own will require a wider range of technologies than the deep-learning systems that have powered many recent breakthroughs, Hassabis argued.

"Deep learning is an amazing technology and hugely useful in itself, but in my opinion it's definitely not enough to solve AI, [not] by a long shot," he said.

"I would regard it as one component, maybe with another dozen or half-a-dozen breakthroughs we're going to need like that. There's a lot more innovation that's required.

"The brain is one integrated system but you've got different parts of the brain responsible for different things.

"You've got the hippocampus for episodic memory, the pre-frontal cortex for your control, and so on.

"You can think about deep learning as it currently is today as the equivalent in the brain to our sensory cortices: our visual cortex or auditory cortex.

"But, of course, true intelligence is a lot more than just that, you have to recombine it into higher-level thinking and symbolic reasoning, a lot of the things classical AI tried to deal with in the 80s.

"One way you can think about our research program is [that it's investigating] 'Can we build out from our perception, using deep-learning systems and learning from first principles? Can we build out all the way to high-level thinking and symbolic thinking?'.

"In order to do that we need to crack problems like learning concepts, things that humans find effortless but our current learning systems can't do."

DeepMind is researching how to advance AI in areas that would allow systems to reason at a level that's not possible today and to transfer knowledge between domains, much the same way a human who's driven a car can apply that knowledge to drive a van.

"We're trying to make breakthroughs in new types of technologies that we think are going to be required for things like concept formation, how we bring language understanding into what are currently pre-linguistic systems.

"AlphaGo doesn't understand language but we would like [these systems] to build up to this symbolic level of reasoning -- maths, language, and logic. So that's a big part of our work," he said, adding that DeepMind is also working on how to make learning more efficient, in order to reduce the huge volume of data needed to train deep-learning systems today.
