Fan Hui: What I learned from losing to DeepMind’s AlphaGo

Go is the oldest known board game in the world, and a computer surpassed man’s ability to play it in 2015

In Ridley Scott’s seminal 1982 film Blade Runner, the sinister Tyrell corporation manufactures eerily human-like robots under the company slogan ‘more human than human’. This tension between what is human and what is not, contrary to appearances, is the central premise behind the Barbican’s latest exhibition AI: More than Human, raising essential questions around our perception of intelligence – artificial or otherwise.

The centrepiece of the show is a low table with inbuilt screens displaying grids peppered with black and white blobs – a digitised version of ancient strategic board game Go, believed to have originated in China some 3,000 years ago. In 2015, DeepMind, a Google-owned AI research lab based in London, made history when its AlphaGo computer program defeated a professional human at Go – the game long hailed as the “holy grail of AI”.

The central premise of Go is for each player to form territories on a 19×19 board using their black or white stones, capturing their opponent’s pieces in the process. While AI’s aptitude for board games had been apparent for over 20 years thanks to IBM supercomputer Deep Blue beating chess champion Garry Kasparov in the mid-1990s, the complexity, subtlety and sheer difficulty of evaluating Go’s board positions and moves made it a googol – ten raised to the power of 100, or 1 followed by 100 zeroes – times more challenging than chess. There are, it has been calculated, fewer atoms in the universe than Go board configurations, and experts believed it would be at least another decade before a computer could defeat a human at Go.
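The scale gap can be checked with a quick back-of-envelope calculation (a rough illustration only – this counts raw board colourings, not the smaller set of legal positions):

```python
import math

# Rough upper bound: each of the 361 points on a 19x19 board is
# empty, black or white, giving 3**361 raw configurations
# (not all of which are legal Go positions).
raw_configs = 3 ** 361
atoms_in_universe = 10 ** 80  # a commonly cited rough estimate

print(f"~10^{math.floor(math.log10(raw_configs))} raw board configurations")
print(raw_configs > atoms_in_universe)  # True
```

Even this crude upper bound, around 10^172, dwarfs the estimated number of atoms in the observable universe, which is why brute-force search of the kind that worked for chess was never an option for Go.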


Consequently, when three-time European Go champion Fan Hui sat down to play AlphaGo at DeepMind’s London headquarters in October 2015, he was convinced he would win. Instead he lost five games out of five over four days, shocked at the calm, stable way the program dismantled his technique. “I know AlphaGo is a computer, but if no one told me, maybe I would think the player was a little strange, but a very strong player, a real person,” he said at the time.

DeepMind had trained AlphaGo by showing it many strong amateur games of Go to develop its understanding of how a human plays, before setting it to play versions of itself thousands of times – a novel form of reinforcement learning in which the program was free to learn the game for itself, and which gave it the ability to rival an expert human. History had been made, and centuries of received learning overturned in the process.
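The self-play idea described above can be sketched in miniature. The toy below is not DeepMind’s code and bears no resemblance to AlphaGo’s neural networks; it only illustrates the principle that a program can improve by playing copies of itself and reinforcing the moves that led to wins. The game here is one-pile Nim (take 1–3 stones; whoever takes the last stone wins), chosen purely because it is tiny, and all names are invented for illustration:

```python
import random
from collections import defaultdict

# Tabular policy: pile size -> weight for each candidate move (take 1, 2 or 3).
policy = defaultdict(lambda: {1: 1.0, 2: 1.0, 3: 1.0})

def choose(pile):
    """Pick a move at random, in proportion to its learned weight."""
    moves = {m: w for m, w in policy[pile].items() if m <= pile}
    r = random.uniform(0, sum(moves.values()))
    for m, w in moves.items():
        r -= w
        if r <= 0:
            return m
    return max(moves)

def self_play_episode(start=10):
    """Play one game of the program against itself; reinforce the winner's moves."""
    history = {0: [], 1: []}  # moves made by each of the two copies
    pile, player = start, 0
    while pile > 0:
        move = choose(pile)
        history[player].append((pile, move))
        pile -= move
        player ^= 1
    winner = player ^ 1  # the copy that just took the last stone
    for pile_seen, move in history[winner]:
        policy[pile_seen][move] += 0.1  # nudge winning moves up
    return winner

random.seed(0)
for _ in range(20_000):
    self_play_episode()

# With enough episodes, the policy tends toward the known optimal strategy
# for this game: leave the opponent a multiple of 4 stones.
```

Real systems replace the lookup table with deep neural networks and the win-count update with far more sophisticated learning rules, but the loop – play yourself, keep what wins – is the same.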

Three and a half years on, Fan Hui appears happy to relive what he previously described as a “very difficult” time. Born in China in 1981, he moved to France in 2000 and became the European Go champion in 2013, a title he retained for another three years before retiring from competitive play the year after his loss. “After I played AlphaGo, everything changed,” he tells i, enthusiastically. “I won the next tournament in Europe I played – very, I feel, easily, and played another international tournament, but it was difficult because I was playing very young players at a strong level.”

“When people ask about how I felt when I lost to AlphaGo, I think it’s difficult for them to understand how much that experience changed the game for me. Before I played AlphaGo, I felt like I knew the world, like I knew many things. But afterwards, everything changed. Now I really believe everything is possible, because I saw something that people thought was impossible.”

At 37, he is now focused on teaching the game to others and on being a husband and father, while the triumphant AlphaGo went on to beat world champion Go player Lee Sedol 4-1 in March the following year, and spawned three new, ever more powerful versions: AlphaGo Master, AlphaGo Zero and AlphaZero. DeepMind has stated that its ultimate goal is to “solve intelligence” before using intelligence “to solve everything else”, and has since moved into healthcare and diagnosis through partnerships with the NHS and Moorfields Eye Hospital.

“Go is an extraordinary game, but it represents what we can do with AI in all kinds of other spheres,” says Murray Shanahan, professor of cognitive robotics at Imperial College London and senior research scientist at DeepMind. “In just the same way that there are all kinds of realms of possibility within Go that have not been discovered, we could never have imagined the potential for discovering drugs and other materials.”

While some in the Go community were troubled by the arrival of AlphaGo and its implications for the future of the game, Fan Hui has come to see AI as a useful assistant, a tool to learn from. “In Go culture, when we play and someone wins and someone loses, we never say: ‘Okay, good game, we’re finished.’ One thing we all do, everyone, even the world champion, is review the game together. We realise we know only about 5 per cent of the game, we know nothing,” he explains.

The program opened the community’s eyes to moves they never would have considered before, including ones their teachers may have discouraged them from making in the past, something he concedes is a “huge change” in a game which has existed for millennia. But if the sole aim of Go was to determine the stronger player, it never would have survived. Go, he maintains, helps players to find something inside themselves: an essential part of humanity uncovered by something which is not.

“The day before [I lost] they used to think an AI could never play a game with a professional Go player,” he says. “I can see the world is so much bigger than I thought before, and I really like this feeling. It changed me.”

‘AI: More than Human’, Barbican Centre, London, to 26 August (020 7638 8891)