WUZHEN, China/TOKYO Ke Jie, the world's No. 1 Go player, was not having a good day.

It was May 27, and the Chinese master of the board game was in the third bout of his match against AlphaGo, Google's artificial intelligence-driven program, at the Future of Go Summit in Wuzhen, China. Ke had already lost the first two games, and after having publicly stated that he could beat AlphaGo, this was one he simply could not lose.

But minute by minute, it became increasingly clear that he was being cornered. About two hours into the game, Ke left his seat. With his "thinking time" clock ticking away, he returned 10 minutes later, held his head in his hand, took off his glasses to wipe away tears, and made his next move. An hour later, Ke surrendered, becoming the second Go champ, after South Korea's Lee Sedol, to be defeated in public by AlphaGo.

When news of the defeat spread, there was a renewed sense of unease among people wary of the technology. Go is widely considered the most complex of all board games, and many thought AI would have particular difficulty besting humans at it. Seeing a Go champion lose, not once but twice, was enough to convince some that AI will soon become more intelligent than humans, stealing many of our jobs in the process.

Even before the Go champions were toppled, prominent figures from academia and industry were sounding the alarm over this prospect. British physicist Stephen Hawking, one of the world's foremost thinkers, warned in a 2014 interview with the BBC that the development of full artificial intelligence could "spell the end of the human race" and that humans, "who are limited by slow biological evolution, couldn't compete, and would be superseded."

Bill Gates, the founder of Microsoft, has called AI a threat that could grow beyond our ability to control.

THE BRIGHTER SIDE Demis Hassabis, co-founder and CEO of the Google-owned DeepMind Technologies, the company that developed AlphaGo, chooses to focus more on the opportunities the technology presents.

He says the ability of computers to replicate intelligence is an incredible tool, likening it to having the world's best research assistant at one's fingertips.

Hassabis spoke with The Nikkei about AI's potential -- both good and bad -- on the sidelines of the May 23-27 Go summit where Ke met his defeat.

Is intelligence computable? In other words, is it possible to compute everything that is going on in the brain? Well, this is an open research question, but the current -- my current -- betting is probably yes, because there doesn't seem to be anything noncomputable in the brain.

Do you know Roger Penrose? He's speculated for many years that maybe there's some kind of quantum effect in the brain. "Quantum consciousness," he called it. Right? If that was true, then it might be noncomputable.

But he collaborated with some top biologists to see if they could find any quantum effect in the microtubules or other parts of the biology, and no one's found anything that isn't classical computing. So, it suggests that it is computable, in the Alan Turing sense of the word.

It's just incredibly complex, but computable. So that's my current working assumption. But we might find otherwise as we go on this journey.

Probably the most important message I got from the summit was that AI is not here to work against humans but to help us solve the great challenges faster. Am I right? Yes, completely correct. That's been my dream since I was 11 years old. To help in very important areas of the world: climate, disease and other areas of science -- chemistry, biology, materials science -- to advance the world for the benefit of everyone.

And I think we just used games initially as a convenient way to measure our program and build things fast. But most things in the world are not "one person wins, one person loses." It's that everyone can win. If we improve the climate or cure a disease, everybody wins. So those are the domains we ultimately want to apply this to.

It's like everybody could have the world's best research assistant with them.

Google and other companies are trying to democratize AI, providing all the tools and computing resources through the cloud. But doesn't that increase the possibility that these capabilities will be used by people with ill intent? I think this is a really important question, and it's not easy to answer because, of course, you want as many people as possible to benefit from it. That's why we openly publish everything. So much stuff is open-source, like TensorFlow (an open-source library for machine learning developed by Google), all these things. So I think that's definitely good for the research community.

But the problem is that, yes, there might be bad actors in the world. So as things get more sophisticated, the community is going to have to think about that, about how to address this problem. One way to address it would be to publish less and open-source less. But, of course, that has its own consequences in terms of ... for the good aspects.

It's a tricky trade-off. I don't have good answers for that at the moment. But again, it's something that should be debated, and ethics should be thought about for that.

Many businesses claim they are using AI without elaborating on what it really is, probably in part to lure investors. Are we in an AI bubble? Yeah.

How concerned are you about this? I don't like it ... because, exactly as you say, every company is saying they're using AI, when some of them don't even know what that means. But because it works with investors, everybody is saying this now.

I think that's pretty bad, and I think there'll be a lot of disappointment. AI has been through hype cycles before.

They call it "AI winter." Some people view this as the same as what happened the last two times: that we're in another bubble, and then there will be a winter.

It's funny. I think both things are true. We are in a bubble, but it's real this time. It's just that we're not as far as the bubble thinks we are, and lots of people are just using [AI] as a buzzword. But I really believe this time that we're on the right ladder. ... I think there won't be another winter.

Interviewed by Nikkei staff writer Joshua Ogawa