What would AlphaZero think of the current World Chess Championship match in London? That seemed a hypothetical question, and Alexander Grischuk even declared “AlphaZero does not exist” during one of our live broadcasts, but we’re now excited to be able to share some of its insights in a series of videos by two-time British Champion Matthew Sadler. Together with WIM Natasha Regan he’s had privileged access to DeepMind’s general purpose artificial intelligence system while preparing an upcoming book, Game Changer: AlphaZero’s Groundbreaking Chess Strategies and the Promise of AI.

It’s coming up on one year since the news broke that DeepMind’s AlphaZero had needed just hours of playing against itself to learn chess to a level where it could beat the reigning computer champion Stockfish. The accompanying 10 example wins by AlphaZero took the chess world by storm, and suggested a dynamic, sacrificial way of playing unburdened by the conventional “knowledge” hard-wired into engines and human brains. Jan Gustafsson was one of those who combed the games for potential insights.

Since then everything has gone quiet, but we’re now thrilled to be able to work with English GM Matthew Sadler and WIM Natasha Regan to share some World Championship insights from AlphaZero. Matthew will be familiar to chess24 members as the author of four much-loved video series, and he climbed as high as world no. 14 before taking the unusual decision to quit chess at the age of 25. He took a 12-year hiatus to get a “real job”, but when he came back his infectious enthusiasm for the game helped him climb back to his current (and peak) rating of 2693. Natasha has represented England at chess, and as a Cambridge mathematics graduate has the technical background to grasp the details of how AlphaZero’s AI works.

In this first instalment, Matthew takes us through Games 1-8 of the World Championship match between Magnus Carlsen and Fabiano Caruana in London, which he watched while running AlphaZero on a four-TPU machine. He explains that his approach was to let AlphaZero spend one minute in each position, “the best balance between perfection and usability”, and then to write down the main line of its analysis.

1. The Sicilians with 2…Nc6: Games 1, 3, 5 and 8

Matthew decided to present the analysis thematically, so let’s dive straight into the first video, which covers the four games where Fabiano opened 1.e4 and Magnus responded with 2…Nc6 Sicilians:

Don’t miss the full video, including a deep explanation of the different plan AlphaZero would have adopted with White in Game 1, but here are a few highlights:

1. If Fabiano had played 15.Rxa5! in Game 3 (a move he said he regretted not playing as soon as he’d chosen 15.Bd2 instead), AlphaZero gave him an expected score as high as 80% (AlphaZero works in expected scores rather than the pawn-based evaluations of other engines). Matthew comments:

AlphaZero loves positions in which it feels that it has all the play and the opponent only has to sit and wait for things to happen, because AlphaZero is extremely good at optimising its play.

2. The 6.b4 “Wing Gambit” in Game 5 is AlphaZero’s second choice after 6.Bxc6.

3. 7.Nd5 in the Sveshnikov gets AlphaZero’s vote of approval. As Matthew comments:

Why is it interesting to talk about that? The point is that AlphaZero is a self-learning machine. It’s taught itself chess within 9 hours and 44 million self-play games and it’s got a very definite view on how to play chess and which positions are good and which are not. So when AlphaZero says, “yes, I like this opening move,” there’s some real basis to it - there’s some real thought, some real practice that’s gone into it.

4. Fabiano’s 23.Rad1 in Game 8 came in for some criticism afterwards, but it's actually AlphaZero’s choice in that position. The computer agrees with everyone else, however, that 24.h3? was a mistake. It evaluated 24.Qh5! as a 93% expected score for the challenger!

5. What does AlphaZero add to analysis?

It’s the way it’s so quick to find clear, concrete plans, and the way those plans are reflected in all of its lines. It’s consistent.
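As noted above, AlphaZero reports expected scores rather than the centipawn values of conventional engines. A common back-of-the-envelope conversion between the two scales (an illustrative assumption borrowed from the Elo rating formula, not AlphaZero’s or Stockfish’s actual model) maps a centipawn advantage onto an expected score with a logistic curve:

```python
# Illustrative sketch: mapping a conventional centipawn evaluation
# onto an expected score in [0, 1], the scale AlphaZero reports.
# The 400-centipawn scale constant comes from the Elo rating formula
# and is an assumption for illustration only.

def expected_score(centipawns: float) -> float:
    """Logistic mapping from centipawn advantage to expected score."""
    return 1.0 / (1.0 + 10.0 ** (-centipawns / 400.0))

if __name__ == "__main__":
    for cp in (0, 100, 200, 400):
        print(f"{cp:+} cp -> {expected_score(cp):.0%} expected score")
```

On this rough mapping a level position gives 50%, and an advantage somewhat above two pawns would correspond to the ~80% score AlphaZero attached to 15.Rxa5! in Game 3.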

2. The Queen’s Gambit Declined: Games 2 and 7

Matthew notes that the quieter Queen’s Gambit Declined, as played by Fabiano in these two games, is an opening favoured by AlphaZero, which also approves of the line chosen by Magnus with 5.Bf4 (and not the previously popular 5.Bg5). It suggests White’s expected score, if Magnus had dared to reply 11.Nd2 to the surprise 10…Rd8 in Game 2, would have been 65-70%, though as Matthew cautions, that’s closer to a draw than to a win. He explains that, “AlphaZero was cheering all the way for Fabiano,” and largely approved of his play in both of these games.
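Matthew’s caution makes sense once you recall that an expected score blends wins and draws: a win counts 1 and a draw ½, so a 65-70% figure can be made up mostly of draws. A tiny worked example (the win/draw splits here are invented purely for illustration):

```python
# Expected score = P(win) * 1 + P(draw) * 0.5 + P(loss) * 0
# Two hypothetical ways to reach roughly the same expected score:

def expected_score(p_win: float, p_draw: float) -> float:
    """Expected score for White given win and draw probabilities."""
    return p_win + 0.5 * p_draw

# Drawish position: 34% wins, 66% draws, 0% losses
drawish = expected_score(0.34, 0.66)
# Sharper position: 60% wins, 10% draws, 30% losses
sharp = expected_score(0.60, 0.10)
print(f"drawish: {drawish:.2f}, sharp: {sharp:.2f}")
```

Both hypothetical positions score about the same, but the first is far closer to a guaranteed draw than to a win, which is exactly the distinction Matthew draws about the 65-70% figure.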

3. The English Opening and the Petroff: Games 4 and 6

In the Reversed Sicilian in Game 4 Matthew notes that 6…Bc5, first introduced at the top level after some deep computer checking by Alexander Grischuk, is also AlphaZero’s choice, but DeepMind’s creation doesn’t only give the stamp of approval to known moves. Matthew points out that the never-played 9…Nb6 is actually its choice on move 9 rather than Fabi’s 9…Nxc3. It agrees with Garry Kasparov and everyone else that 15.b5! should have been the follow-up after 14.a4 by the World Champion. Later Black was absolutely fine, and Matthew was taken aback by some of AlphaZero’s suggestions:

It’s not a huge calculating machine – it’s calculating a lot less than any of the other engines, but once it finds its path where it wants to go, the energy and the power that it puts into prosecuting that plan is enormous, and you really can hear the wind flowing through your hair (well I can't, but you probably can!) as this enormous rush of attack blows through. It's really, really impressive.

In Game 6 AlphaZero approved of how Fabiano, “understood the dynamic potential of his position” in a quiet Petroff that developed into a thriller. Matthew was impressed by the schematic way AlphaZero would have approached the ending, but did note an Achilles’ heel:

I’m afraid AlphaZero did not see that mate-in-36. To be honest, looking at AlphaZero’s big strengths, and also the fact that it’s not using endgame tablebases but doing it all by itself, it doesn’t surprise me too much, but it’s worth mentioning that this is something that it didn’t actually find.

You can play through all of Matthew’s AlphaZero-powered analysis in the viewer below (you can also download the PGN file):

Matthew will also be bringing us AlphaZero’s thoughts on the remaining games of the match, while it looks like we’re going to be hearing a lot more from AlphaZero in the coming year. Game Changer, the book on AlphaZero by Matthew Sadler and Natasha Regan, has a planned release date of early 2019.



With thanks to DeepMind for their support.

See also: