It was 1997, on the 35th floor of a Midtown Manhattan skyscraper, that chess master Garry Kasparov ambled off stage in disbelief, arms up in defeat, having just lost to a computer. The famed dethroning of the reigning chess world champion by IBM’s Deep Blue computer signaled a brave new world of computer intelligence—of machines overtaking humanity.

Over 20 years on, artificial intelligence has barreled ahead. Whereas Deep Blue took down Kasparov via sheer computing power, newer computer technologies actually learn and deduce solutions on their own. And the latest research by the AI company DeepMind (owned by Alphabet, Google’s parent company) has just taken the field another step forward.

Published today in Science, DeepMind’s AlphaZero system has demonstrated superhuman success at not just chess but also shogi—aka “Japanese chess”—and go, an ancient Chinese board game with a staggering number of move possibilities (around 300 times that of chess). It is technology that once fully developed could have a wide range of uses—from drug development to mathematics to material design.

Many prior game-playing technologies required information provided by humans—they had to be prepped to handle a specific task. The AlphaZero algorithm, by contrast, learns how to “play” games on its own. It does so via reinforcement learning, in which a machine learns about an interactive environment through trial, error and reward. In the new research AlphaZero played around 60 million games against itself to reinforce its “understanding” of the rules.
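AlphaZero itself pairs deep neural networks with Monte Carlo tree search running on thousands of specialized processors, but the core self-play loop can be illustrated at toy scale. The sketch below—entirely illustrative, not DeepMind’s code—uses tabular Q-learning on single-pile Nim (players alternately remove one to three stones; whoever takes the last stone wins). Both “players” share one value table, so every game the program plays against itself improves the same policy:

```python
import random
from collections import defaultdict

def self_play_train(heap=10, max_take=3, episodes=20000, seed=0):
    """Learn Nim by self-play: both sides share one Q-table,
    so each game played against itself improves the same policy."""
    rng = random.Random(seed)
    Q = defaultdict(float)      # (stones_left, action) -> estimated value
    alpha, epsilon = 0.5, 0.2   # learning rate, exploration rate
    for _ in range(episodes):
        stones, history = heap, []
        while stones > 0:
            actions = range(1, min(max_take, stones) + 1)
            if rng.random() < epsilon:                  # explore (trial and error)
                a = rng.choice(list(actions))
            else:                                       # exploit current knowledge
                a = max(actions, key=lambda x: Q[(stones, x)])
            history.append((stones, a))
            stones -= a
        # The player who took the last stone won: propagate reward +1
        # backward through the game, flipping sign for the alternating mover.
        reward = 1.0
        for state, action in reversed(history):
            Q[(state, action)] += alpha * (reward - Q[(state, action)])
            reward = -reward
    return Q

def best_move(Q, stones, max_take=3):
    """Greedy move from the learned table."""
    return max(range(1, min(max_take, stones) + 1),
               key=lambda a: Q[(stones, a)])
```

After training, the table recovers the classic Nim strategy of leaving the opponent a multiple of four stones—knowledge nobody encoded by hand, only the rules and the win/loss signal. The gulf between this sketch and AlphaZero (which must evaluate board positions it has never seen, hence the neural networks) is, of course, enormous.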

It then bested the leading chess program Stockfish—which for humans is nearly impossible to beat—winning 155 of 1,000 games, losing just six and drawing the rest. AlphaZero also bested Elmo, the world champion shogi algorithm, 91 percent of the time, and took down AlphaGo—an earlier version of itself designed specifically to play go—in 61 percent of games played.

A major advancement here shows AlphaZero is not limited to just one function like previous game-playing technologies. DeepMind appears to have developed an algorithm that can master many if not most board games with fixed rules. “We are very excited that we have a program that completely learns these games without [the help] of human knowledge,” says lead AlphaZero engineer Julian Schrittwieser. “Generally speaking, it is an algorithm trying to solve complex, multistep problems.”

AlphaZero’s extraordinary computing ability is in part made possible by employing 5,000 of what are called tensor processing units, or TPUs. Developed by Google over the past few years, TPUs are microprocessors designed specifically to enable the processing of artificial intelligence algorithms. In the new study the processors drove the self-play that resulted in machine learning. “It certainly is cool that a generalized-learning algorithm has learned to play various board games without encoding a lot of knowledge about the particular game,” says Daylen Yang, a computer engineer and contributor to Stockfish who was not involved in the DeepMind research. “AlphaZero shows that it can learn that knowledge automatically—at least if you have Google’s 5,000 TPUs, which is a lot of computing!”

Modern computer science really began with the game of chess. Pioneers such as Alan Turing and Claude Shannon had been developing algorithms to fell kings, knights and queens since the field’s inception in the 1940s. “Chess subsequently became a grand challenge task for a generation of artificial intelligence researchers,” the DeepMind authors wrote.

In a commentary on DeepMind’s work accompanying the new paper, IBM computer scientist Murray Campbell wrote that board games are a logical starting point for AI. All of the information needed to play is visible to the players, and such games are therefore easier to analyze than, say, poker, in which players cannot see some of their opponents’ cards.

Still, progress is being made in card games as well. Recently two separate research groups reported developing algorithms capable of beating professional poker players at no-limit Texas hold’em. Another challenge to AI researchers will be multiplayer video games. Researchers from DeepMind and elsewhere are currently working on algorithms to tackle games such as StarCraft II—with multiple players interacting within a large, only partially observable physical space simulating real-world scenarios.

Schrittwieser brims with optimism about prospects for steadily advancing AI technology. “We want to look at applications in science and medicine. Maybe we have a set of molecules and need to figure out how they need to interact to develop a new medication,” he envisions. “Or maybe a mathematician has a theory and our algorithm helps them through a sequence of steps to arrive at a proof.”


Not unlike the recent backlash against a scientist in China claiming to have edited the genomes of human embryos, advances in AI come with a certain unease. Even setting aside Elon Musk’s dire warnings about computer learning creating “immortal dictators” and fostering human irrelevancy, many in computer science, including Schrittwieser, agree the field should proceed with caution and transparency. “We’re facing smart machines with great caution,” he says. “It’s no different than any other industry. We have committees that include people from companies like DeepMind, Google and Facebook to ensure the ethics of AI.”

Like gene editing, the pursuit of computer-learning systems appears inevitable. And for the time being it seems humans can avoid a machine-imposed checkmate. “I see it much more as a tool for humans to use—to help them figure out their tasks,” Schrittwieser says. “For now, it is inspiring new moves for chess players.”