TL;DR: AlphaStar is the first AI to reach the top league of a widely popular esport without any game restrictions. This January, a preliminary version of AlphaStar challenged two of the world's top players in StarCraft II, one of the most enduring and popular real-time strategy video games of all time. Since then, we have taken on a much greater challenge: playing the full game at a Grandmaster level under professionally approved conditions.

Our new research differs from prior work in several key regards:

- AlphaStar now plays under the same kind of constraints as humans, including viewing the world through a camera and stronger limits on the frequency of its actions (developed in collaboration with StarCraft professional Dario “TLO” Wünsch); see the rate-limiter sketch after this list.
- AlphaStar can now play in one-on-one matches as and against Protoss, Terran, and Zerg – the three races present in StarCraft II.
- Each of the Protoss, Terran, and Zerg agents is a single neural network.
- The League training is fully automated, and starts only with agents trained by supervised learning, rather than from previously trained agents from past experiments.
- AlphaStar played on the official game server, Battle.net, using the same maps and conditions as human players.
- All game replays are available here.
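To make the action-rate constraint concrete, here is a minimal sketch of a sliding-window cap on agent actions. It is purely illustrative: the real limits were agreed with TLO and enforced by the game interface, and the default of 22 actions per 5-second window echoes the figure reported for the Nature agent rather than the exact rules.

```python
import collections

class ActionRateLimiter:
    """Illustrative sliding-window cap on agent actions.

    Hypothetical sketch only: the real AlphaStar limits were agreed
    with professional player TLO and enforced in the game interface.
    """

    def __init__(self, max_actions=22, window_seconds=5.0):
        self.max_actions = max_actions
        self.window_seconds = window_seconds
        self._times = collections.deque()  # timestamps of recent actions

    def allow(self, now):
        """Return True if an action issued at time `now` is permitted."""
        # Forget actions that have aged out of the sliding window.
        while self._times and now - self._times[0] >= self.window_seconds:
            self._times.popleft()
        if len(self._times) < self.max_actions:
            self._times.append(now)
            return True
        return False  # over the cap: the agent must wait
```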

We chose to use general-purpose machine learning techniques – including neural networks, self-play via reinforcement learning, multi-agent learning, and imitation learning – to learn directly from game data. Using the advances described in our Nature paper, AlphaStar was ranked above 99.8% of active players on Battle.net, and achieved a Grandmaster level for all three StarCraft II races: Protoss, Terran, and Zerg. We expect these methods could be applied to many other domains.
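As a rough illustration of how these techniques fit together, the sketch below shows a minimal self-play "league" loop. Everything in it (Agent, play_match, reinforce) is a hypothetical placeholder rather than the paper's actual interface; it only captures the structure described above: start from imitation-learned agents, then train a learner by reinforcement learning against opponents sampled from a growing pool of frozen past agents.

```python
import copy
import random

class Agent:
    def __init__(self, weights):
        self.weights = weights  # stands in for neural-network parameters

def play_match(learner, opponent):
    """Stand-in for one full StarCraft II game; returns True on a win."""
    return random.random() < 0.5  # placeholder outcome

def reinforce(learner, opponent, won):
    """Stand-in for a reinforcement-learning update from the game."""
    # A real implementation would update learner.weights here.

def train_league(supervised_agents, num_games=10_000, snapshot_every=500):
    # Training starts only from supervised (imitation-learned) agents,
    # never from agents carried over from past experiments.
    league = [copy.deepcopy(a) for a in supervised_agents]
    learner = copy.deepcopy(random.choice(supervised_agents))
    for game in range(1, num_games + 1):
        opponent = random.choice(league)  # sample a past strategy
        won = play_match(learner, opponent)
        reinforce(learner, opponent, won)
        if game % snapshot_every == 0:
            # Freeze a copy of the learner into the league so future
            # training must stay robust to all earlier strategies.
            league.append(copy.deepcopy(learner))
    return learner, league
```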

Learning-based systems and self-play are elegant research concepts which have facilitated remarkable advances in artificial intelligence. In 1992, researchers at IBM developed TD-Gammon, combining a learning-based system with a neural network to play the game of backgammon. Instead of playing according to hard-coded rules or heuristics, TD-Gammon was designed to use reinforcement learning to figure out, through trial-and-error, how to play the game in a way that maximises its probability of winning. Its developers used the notion of self-play to make the system more robust: by playing against versions of itself, the system grew increasingly proficient at the game. When combined, the notions of learning-based systems and self-play provide a powerful paradigm of open-ended learning.
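To make the trial-and-error idea concrete, the sketch below applies TD-style value learning with self-play to a toy "race to N" game, a stand-in for backgammon. The tabular value function replaces TD-Gammon's neural network, and all specifics are illustrative assumptions rather than the original algorithm.

```python
import random

# States are (mover_position, opponent_position), valued from the
# perspective of the player about to move.
N = 10          # first player to reach position N wins
ALPHA = 0.1     # learning rate
EPSILON = 0.1   # exploration rate
values = {}     # state -> estimated win probability for the mover

def v(state):
    return values.get(state, 0.5)

def choose_step(state):
    """Pick the move (1 or 2) that looks best under the current values."""
    mine, theirs = state
    if random.random() < EPSILON:
        return random.choice((1, 2))  # occasional exploration
    # After our move the opponent moves, so our winning chance is one
    # minus the value of the resulting state from their perspective.
    return max((1, 2), key=lambda s: 1.0 if mine + s >= N
               else 1.0 - v((theirs, mine + s)))

def self_play_game():
    """Play one game of the system against itself; return the states
    visited (each from the mover's perspective), in order."""
    state, trajectory = (0, 0), [(0, 0)]
    while True:
        mine, theirs = state
        mine += choose_step(state)
        if mine >= N:
            return trajectory  # the mover at trajectory[-1] won
        state = (theirs, mine)  # swap perspective for the opponent
        trajectory.append(state)

def td_update(trajectory):
    """Backward sweep: pull each state's value toward its successor's,
    flipping perspective between consecutive states."""
    target = 1.0  # the final mover won
    for state in reversed(trajectory):
        values[state] = v(state) + ALPHA * (target - v(state))
        target = 1.0 - values[state]

for _ in range(5000):
    td_update(self_play_game())
```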

Many advances since then have demonstrated that these approaches can be scaled to increasingly challenging domains. For example, AlphaGo and AlphaZero established that it was possible for a system to learn to achieve superhuman performance at Go, chess, and shogi, and OpenAI Five and DeepMind’s FTW demonstrated the power of self-play in the modern games of Dota 2 and Quake III.

At DeepMind, we’re interested in understanding the potential – and limitations – of open-ended learning, which enables us to develop robust and flexible agents that can cope with complex, real-world domains. Games like StarCraft are an excellent training ground to advance these approaches, as players must use limited information to make dynamic and difficult decisions that have ramifications on multiple levels and timescales.