AlphaZero is a more general version of AlphaGo, the program developed by DeepMind to play the board game Go. In 24 hours, AlphaZero taught itself to play chess well enough to beat one of the best existing chess programs.

What’s also remarkable, Hassabis explained, is that the program sometimes makes seemingly crazy sacrifices, like offering up a bishop and queen to gain a positional advantage that leads to victory. Such sacrifices of high-value pieces are normally rare. In another case the program moved its queen to the corner of the board, a bizarre-looking move with surprising positional value. “It’s like chess from another dimension,” Hassabis said.

Hassabis speculates that because AlphaZero teaches itself, it benefits from not following the usual approach of assigning value to pieces and trying to minimize losses. “Maybe our conception of chess has been too limited,” he said. “It could be an important moment for chess. We can graft it into our own play.”

The game of chess has a long history in artificial intelligence. The best programs, developed and refined over decades, incorporate huge amounts of human intelligence. Although in 1997 IBM’s Deep Blue beat the world champion at the time, Garry Kasparov, that program, like other conventional chess programs, required careful hand-programming.

The original AlphaGo, designed specifically for Go, was a big deal because it learned to play a game that is enormously complex and difficult to teach, requiring an instinctive sense of board positions. AlphaGo mastered Go by ingesting thousands of example games and then practicing against another version of itself. It did this in part by training a large neural network using an approach known as reinforcement learning, which is modeled on the way animals seem to learn (see “Google’s AI Masters Go a Decade Earlier Than Expected”).
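To give a flavor of how learning from self-play works, here is a deliberately tiny sketch: tabular reinforcement learning on a one-heap game of Nim (players alternately take 1–3 stones; taking the last stone wins). This is an illustrative toy under assumed parameters, not DeepMind’s actual neural-network method — but the core loop is the same idea: the program plays both sides, and positions that led to wins or losses have their estimated values nudged toward the observed outcome.

```python
import random

# Toy self-play reinforcement learning on one-heap Nim (take 1-3 stones,
# taking the last stone wins). Illustrative sketch only; AlphaZero uses a
# deep neural network and tree search, not a lookup table like this.

N = 10  # starting heap size (assumed toy parameter)

def train(episodes=20000, alpha=0.1, eps=0.2, seed=0):
    rng = random.Random(seed)
    # value[s] = estimated probability that the player to move
    # at heap size s goes on to win the game
    value = {s: 0.5 for s in range(N + 1)}
    value[0] = 0.0  # no stones left: the player to move has already lost

    for _ in range(episodes):
        s = N
        trajectory = []  # heap sizes seen, alternating between the two players
        while s > 0:
            moves = [m for m in (1, 2, 3) if m <= s]
            if rng.random() < eps:
                m = rng.choice(moves)  # occasional exploration
            else:
                # greedy self-play: leave the opponent the worst position
                m = min(moves, key=lambda m: value[s - m])
            trajectory.append(s)
            s -= m
        # the player who just moved took the last stone and won
        result = 1.0
        for st in reversed(trajectory):
            value[st] += alpha * (result - value[st])
            result = 1.0 - result  # winner and loser alternate back up the game

    return value

value = train()
```

After training, the table recovers the known theory of this game: positions where the heap is a multiple of 4 are losing for the player to move (their learned value is near 0), and all other positions are winning. No example games were provided; the values emerge purely from the program playing itself.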

DeepMind has since demonstrated a version of the program, called AlphaGo Zero, that learns without any example games, instead relying purely on self-play (see “AlphaGo Zero Shows Machines Can Become Superhuman Without Any Help”). AlphaZero goes further still, showing that a single program can master three different board games: chess, shogi, and Go.

AlphaZero’s achievements are impressive, but it still needs to play many more practice games than a human chess master. Hassabis says this may be because humans benefit from other forms of learning, such as reading about how to play the game and watching other people play.

Still, some experts caution that the program’s capabilities, while remarkable, should be taken in context. Speaking after Hassabis, Gary Marcus, a professor at NYU, said that a great deal of human knowledge went into building AlphaZero. He also suggested that human intelligence seems to involve some innate capabilities, such as an intuitive ability to develop language.

Josh Tenenbaum, a professor at MIT who studies human intelligence, said that if we want to develop real, human-level artificial intelligence, we should study the flexibility and creativity that humans exhibit. He pointed, among other examples, to the intelligence of Hassabis and his colleagues in devising, designing, and building the program in the first place. “That’s almost as impressive as a queen in the corner,” he quipped.