SEOUL, SOUTH KOREA — After an extraordinarily close contest, Google's artificially intelligent Go-playing computer system has beaten Lee Sedol, one of the world's top players, in the first game of their historic five-game match at Seoul's Four Seasons hotel. Known as AlphaGo, this Google creation not only proved it can compete with the game's best, but also showed off its remarkable ability to learn the game on its own.

A group of Google researchers spent the last two years building AlphaGo at an AI lab in London called DeepMind. Until recently, experts assumed that another ten years would pass before a machine could beat one of the top human players at Go, a game that is exponentially more complex than chess and requires, at least among the top humans, a certain degree of intuition. But DeepMind accelerated the progress of computer Go using two complementary forms of machine learning—techniques that allow machines to learn certain tasks by analyzing vast amounts of digital data and, in essence, practicing these tasks on their own.
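The two-stage idea can be caricatured in a few lines of code. This is a toy sketch, not DeepMind's method: the "expert games," the weights, and the win signal are all invented for illustration. A policy first imitates expert moves (supervised learning), then plays against itself and upweights moves from winning playouts (reinforcement learning).

```python
import random

# Stage 1: supervised learning -- count how often "experts" chose each move.
# (Hypothetical data; AlphaGo trained deep networks on millions of positions.)
expert_games = [["A", "B", "A"], ["A", "C"], ["B", "A"]]
policy = {}
for game in expert_games:
    for move in game:
        policy[move] = policy.get(move, 0) + 1.0

def pick_move(policy):
    # Sample a move in proportion to its learned weight.
    moves, weights = zip(*policy.items())
    return random.choices(moves, weights=weights)[0]

# Stage 2: reinforcement learning -- self-play, then reinforce
# the moves that appeared in winning games.
def self_play_update(policy, rounds=100):
    for _ in range(rounds):
        move = pick_move(policy)
        won = random.random() < 0.5   # stand-in for an actual game result
        if won:
            policy[move] += 0.1       # upweight winning choices
    return policy

policy = self_play_update(policy)
```

The real system replaces the move-count table with deep neural networks, but the shape of the training process—imitate humans first, then improve through practice—is the same.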

The match—which extends through next Tuesday—serves as a litmus test for the progress of machine learning. Similar AI techniques have already reinvented myriad services inside Google and other Internet giants, including the Google search engine, and they're poised to accelerate the progress of everything from scientific research to robotics.

In Seoul this morning, the match was front-page news—quite literally—with the average Korean very much rooting for native son Lee Sedol. But there is just as much interest inside Google, and that includes some of its biggest names. Jeff Dean, one of the company's most important engineers, is in Seoul for at least the first game. He delivered a speech this morning for the local press on the progress of machine learning inside Google, and just afterwards, Google chairman and former CEO Eric Schmidt sat down for lunch with a handful of reporters at the Four Seasons alongside Demis Hassabis, the CEO of DeepMind. Both carried a copy of The Korea Herald, whose front page carried a photo of Hassabis and Lee Sedol—above the fold.

"I expected it to be big," Hassabis told us. "But not that big."

'Difficult Fight'

Hassabis left the lunch early without taking a bite, saying he was needed as his DeepMind team made the final preparations for the match. Schmidt followed about thirty minutes later. As the match was set to begin, both turned up just outside the match room, trailed by a small mob of TV and print photographers. Apparently, two Korean senators also arrived just before this initial game. "This is a lot more attention than Go usually gets," said one of the match's English-language commentators, Michael Redmond. And Go is enormously popular in Korea. An estimated 8 million Koreans play the game, which is played on a 19-by-19 grid with small black and white stones.

Lee Sedol and AlphaGo's operator, DeepMind researcher Aja Huang, played the game in a small, closed room alongside a handful of officials. The press watched from two separate commentary rooms, one for Korean speakers and one for English. Sedol played black and AlphaGo white, which meant Sedol made the first move, making a fairly common opening—and one that was only slightly different from the opening played by three-time European Go champion Fan Hui during his closed-door match with AlphaGo this past October. AlphaGo won that match five games to nil.

According to Michael Redmond, the English-language commentator and a professional Go player who was born in the US, Lee Sedol's opening was an aggressive one. The Korean is known for his aggressive and fast-moving style of play. "He starts early in his fight," Redmond said. But AlphaGo responded with a game of "balance"—a relatively peaceful game, as Redmond described it. This was consistent with the way the machine played European champion Fan Hui in October.

But about 12 moves into the match, AlphaGo went on the offensive as well. "Lee Sedol invited the fight," Redmond said, "but AlphaGo did not back away from it." And the match continued apace. Redmond said he did not see any precedent for this in the match with Fan Hui. "The fight is getting really complicated," he said. "This is actually the first time I have seen AlphaGo play a game that has this difficult of a fight."

Rapid Rate of Play

Redmond's commentary was illuminating, but his view of AlphaGo also showed just how new—and indeed, how mysterious—the machine's approach really is. Redmond kept referring to the AlphaGo "database," but unlike past Go programs, AlphaGo relies much more on machine learning than on a pre-set list of moves. Part of the attraction of this match is that, before today's game, no one was quite sure how well AlphaGo would perform, because it has spent the last five months essentially teaching itself to play the game at a higher level.

In October, though it soundly beat Fan Hui, AlphaGo was not good enough to beat someone like Lee Sedol. Fan Hui is ranked 633rd in the world, while Lee Sedol is ranked number five and widely regarded as the top player of the last decade. But over the last five months, using a technology called reinforcement learning, AlphaGo essentially played game after game against itself as a way of improving its skills.

Clearly, the system has improved its play a great deal. At the lunch prior to the match, Hassabis said that since October, he and his team had also used machine learning techniques to improve AlphaGo's ability to manage time. In the early to middle part of the game, it matched Lee Sedol with a rapid rate of play. "Both of them are playing fairly quickly," Redmond said.

'A Scary Variation'

Lee Sedol took an (allowed) break about an hour-and-a-half into the game as his clock continued to run. And then the match returned to what commentator Chris Garlock called "a little bit more of a ballet." Redmond said that AlphaGo was planning very much like a human professional, trying to reinforce its weaknesses—that is, its vulnerable groups of stones. "That is a pattern it has always had—the same as a really good Go player," he said, referring to AlphaGo's match with Fan Hui. "That is: making strong moves to reinforce weak groups—and potentially create weak groups [for its opponent]."

Then, at the two hour mark, AlphaGo made another particularly aggressive move, and Garlock said he was nervous—for Lee Sedol. "It just looks scary," he said. And to a certain extent, Redmond agreed. "It's a scary variation. Black has to be careful," he said, referring to Lee Sedol. He was also impressed that AlphaGo was avoiding mistakes of its own. During the match with Fan Hui, Redmond said, AlphaGo made a number of fundamental errors, but this did not really happen in the early to middle part of today's game.

Twenty minutes later, Redmond said that Lee Sedol could not survive by playing "peacefully." He needed to attack on the right side of the board. But many other parts of the board were very much up for grabs. Garlock and Redmond agreed that the match was very much in the balance.

The End Game

As the two players entered the end game, at the two-hour-and-forty-minute mark, the contest remained on a knife edge. Garlock and Redmond loosely tallied the number of points available to each player in various parts of the board, deciding that the match was still too close to call. But Garlock said that this could favor AlphaGo, because its strength is in "calculation." There is some truth to this. AlphaGo uses its machine learning techniques to narrow down the scope of potentially advantageous moves, but then it uses what's called a tree search to examine the possible outcomes of those moves.
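That narrow-then-search division of labor can be sketched in miniature. Everything here is invented for illustration—the toy game tree, the fake policy scores, the position values—and AlphaGo's actual networks and Monte Carlo tree search are far more sophisticated. The point is the structure: a learned policy prunes the candidate moves, and a tree search looks ahead through only those candidates.

```python
def policy_scores(position):
    # A learned policy would score every legal move; here we fake it
    # by simply preferring earlier moves in the list.
    return {move: 1.0 / (i + 1) for i, move in enumerate(position["moves"])}

def search(position, depth, top_k=2):
    if depth == 0 or not position["moves"]:
        return position["value"]  # fall back to a position evaluation
    scores = policy_scores(position)
    # Narrow the search to the policy's top-k candidate moves...
    candidates = sorted(scores, key=scores.get, reverse=True)[:top_k]
    # ...then look ahead through just those moves.
    return max(search(position["children"][m], depth - 1, top_k)
               for m in candidates)

toy = {
    "moves": ["a", "b", "c"],
    "value": 0.0,
    "children": {
        "a": {"moves": [], "value": 0.3, "children": {}},
        "b": {"moves": [], "value": 0.7, "children": {}},
        "c": {"moves": [], "value": 0.9, "children": {}},  # pruned by top_k=2
    },
}
best = search(toy, depth=1)  # considers only "a" and "b", returns 0.7
```

Note the tradeoff the sketch makes visible: move "c" is actually the best outcome, but the policy never nominates it, so the search never sees it. The quality of the learned policy determines whether the pruning helps or hurts.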

Regardless, the machine continued to play at an enormously high level. "It's more than I hoped for," Redmond said. And, yes, the two commentators continually referred to AlphaGo as "he."

As the game approached its conclusion, AlphaGo began using more and more of its available time (each player has 2 hours of unrestricted play, and then, basically, they must make all subsequent moves in less than 60 seconds). But as his clock dropped to around 34 minutes, Lee Sedol seemed to show the first signs of frustration, turning in his chair, wincing, and putting his hand to the back of his head. Then, about six minutes later, Redmond said: "I don't think it's gonna be that close."

Indeed, at the three-hour-and-thirty-minute mark, Lee Sedol resigned.

Redmond called the result "a big surprise," saying he had not expected a win for Google and AlphaGo. Of course, this was only the first of five games. The next is tomorrow at 1pm Seoul time, followed by a rest day. Game three is scheduled for Saturday. Whatever the ultimate outcome of the match, AlphaGo has proven its worth. And perhaps more importantly, it has proven that it can improve by leaps and bounds—mostly on its own. As Redmond said of AlphaGo, well before today's match was over: "It's already a success."