The following blog post, unless otherwise noted, was written by a member of Gamasutra's community.

The thoughts and opinions expressed are those of the writer and not Gamasutra or its parent company.

AI research in RTS games has a rich history. For over a decade, researchers have been working on building bots capable of defeating the best human players, but there is still a long way to go. On Friday, DeepMind and Blizzard announced a collaboration that will result in an open API for StarCraft II AI research next year.

The goal of this article is to cover some of the developments in RTS games and AI research that may have contributed to this outcome. For a more exhaustive look at research in RTS games, check out the survey articles by Ontañon et al. and by Robertson and Watson.

Over the past decade, researchers have transitioned from investigating different AI techniques on RTS games in isolation to collaborations and competitions on much more complex games, where different techniques are matched up head-to-head. For this work to be successful, the following conditions are necessary:

Open APIs for researchers to build and evaluate bots

Competitions to enable researchers to compare different techniques

Replays for learning algorithms to use for training

Human Opponents to evaluate the performance of bots

Most of these conditions were met with the release of the Brood War API in 2009, but the closed nature of the platform made it challenging for researchers to automate the process of training AI systems. With the announcement of an open StarCraft II environment, researchers will have a great opportunity to develop systems capable of expert-level performance in an RTS game. Here are some of the events that I identify as significant in moving towards this goal. If there are additional significant events I should include, please leave a note in the comments section.

1998

StarCraft 1 Released

The original StarCraft was released in March of 1998, and the expansion pack, Brood War, was released in November of the same year. StarCraft became a worldwide hit and sparked a professional gaming scene in South Korea.

Freecraft

Before there was StarCraft, there was Warcraft II, released in 1995. A clone of Warcraft II was first released in 1998 under the name Freecraft and was later renamed Wargus. The clone was built on the Stratagus game engine. Freecraft was an important project for RTS AI research, because much of the initial work in the field used Wargus as a testbed.

2001

Academic Interest in Game AI

One of the seminal articles on game AI was John Laird and Michael van Lent's "Human-Level AI's Killer Application: Interactive Computer Games," published in a 2001 issue of AI Magazine. It was a significant article because it was one of the first AAAI publications to recognize real-time games as an excellent environment for AI research. It also helped shift the mentality of academic researchers away from applying existing approaches to games and toward building new, specialized approaches for them.

2002

Warcraft III Released

One of the great features that came with Warcraft III was a highly-extensible map editor, which was used to create unique content like the initial version of DOTA. The map editor also had some scripting capabilities that could be used to author custom AI. However, AI authored in this manner was limited to a single map and the scripting language provided only a subset of commands to authors. Some researchers were able to implement their AI techniques within this framework, but it was not possible to test different bots against each other.

2003

RTS Games Proposed as an AI Testbed

Michael Buro and Timothy Furtak published an article in 2003 arguing that RTS games presented many novel problems that needed to be addressed in order to build human-level AI. They also proposed the development of an open-source RTS game engine that could be used by the AI research community.

2004

ORTS is Released

The following year, Michael Buro released the first version of the open-source RTS engine, ORTS. The engine could run in both graphical and non-graphical modes, which enabled bots to quickly train over thousands of game sessions. One of the main challenges with using ORTS was that while there was an interface for human players, there were no expert human players for the game against whom bots could be evaluated.

First Wargus Research

Freecraft was rebranded as Wargus, which used the Stratagus game engine. One of the main advantages of Wargus was that it was open source, which opened up the platform for researchers to use any techniques that they wanted to explore. Some of the challenges with Wargus were the lack of replays for analysis, the lack of an active player base, and limited networking code for testing bots against each other. Marc Ponsen was one of the first researchers to publish an article on using Wargus as an AI testbed.

TIELT Proposed

One of the AI projects proposed in 2004 was the TIELT system, a framework that provided a consistent API across multiple game titles. One of the goals of the system was to enable researchers to build AI for one game and transfer the learned knowledge to a new game; for example, domain knowledge learned in Wargus might be applicable to other RTS games. I'm including TIELT in this discussion because one of the outcomes of the DeepMind and Blizzard collaboration will be an API and potentially example data sets, and it's important for that API not to make assumptions about how the AI will operate. One of the challenges with TIELT was that it did not provide direct access to the game state, which limited the number of AI techniques that could take advantage of it.
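To make the API-design point concrete, here is a hypothetical sketch (not TIELT's actual interface) of a game-agnostic API that exposes the full observable state rather than mediating it. All names here are illustrative:

```python
from abc import ABC, abstractmethod

class GameAPI(ABC):
    """Hypothetical game-agnostic interface. Exposing the full observable
    state leaves the choice of AI technique to the researcher; an API that
    mediates state access (as TIELT did) narrows the range of techniques
    that can use it."""

    @abstractmethod
    def observe(self) -> dict:
        """Return the full observable game state (units, resources, map)."""

    @abstractmethod
    def act(self, command: str, **params) -> None:
        """Issue a command, e.g. act('train', unit='footman')."""
```

A concrete game engine would subclass this and fill in both methods; the AI side only ever sees the two abstract calls.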

2005

Reinforcement Learning in Wargus

Wargus was quickly adopted as the environment for researchers building RTS AI. In 2005, researchers started exploring techniques such as reinforcement learning, which is one of the strategies later utilized by AlphaGo. Work in Wargus eventually began to stagnate, because different researchers couldn't evaluate their work against each other's and instead relied on performance against a small collection of hard-coded scripts.
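As a rough illustration of the technique (not the actual Wargus experiments, which used much richer state and action spaces), tabular Q-learning boils down to a single update rule applied over many episodes. The environment interface here is hypothetical:

```python
import random

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2  # learning rate, discount, exploration

def q_learn(env, episodes=200):
    """Tabular Q-learning over a tiny decision problem. `env` must provide
    reset(), actions(state), and step(state, action) returning
    (next_state, reward, done) -- an illustrative interface, not Wargus's."""
    q = {}  # (state, action) -> estimated long-term value
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            actions = env.actions(state)
            # Epsilon-greedy: mostly exploit the best-known action
            if random.random() < EPSILON:
                action = random.choice(actions)
            else:
                action = max(actions, key=lambda a: q.get((state, a), 0.0))
            next_state, reward, done = env.step(state, action)
            best_next = 0.0 if done else max(
                q.get((next_state, a), 0.0) for a in env.actions(next_state))
            old = q.get((state, action), 0.0)
            # Standard Q-learning update toward reward + discounted future value
            q[(state, action)] = old + ALPHA * (reward + GAMMA * best_next - old)
            state = next_state
    return q
```

Given a toy environment with a single build decision, the learned Q-values come to prefer the action that yields reward, which is the same mechanism (at much larger scale) behind the Wargus experiments.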

2006

First ORTS Competition

In 2005, the first ORTS AI competition was announced, and the event was held at the AIIDE 2006 conference at Stanford. The first competition had four entries, and the competition grew each year until it ended in 2009.

2007

ICCup Launched

The International Cyber Cup is a third-party ladder server for StarCraft. The server was important for AI research because bots could be run on it, but not on the official Blizzard servers. Another benefit of the server was that it gave players a letter grade they could use to easily communicate their skill; for example, I was at best a D+ player when I played seriously in 2010.

2008

First StarCraft AI Research

While there were earlier publications involving StarCraft, the first article that I'm aware of that focused on building AI for StarCraft was published in 2008. Hsieh and Sun built a model for predicting which structures and units a player will produce by mining thousands of replays.
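In spirit (the details below are illustrative, not Hsieh and Sun's actual model), replay mining for production prediction can be as simple as counting which build action usually follows an observed prefix of a player's build order:

```python
from collections import Counter, defaultdict

def train_predictor(replays):
    """Count which build action follows each observed prefix.
    `replays` is a list of build-order sequences; the labels used in the
    tests are toy stand-ins, not data mined from real games."""
    model = defaultdict(Counter)
    for seq in replays:
        for i in range(1, len(seq)):
            prefix = tuple(seq[:i])
            model[prefix][seq[i]] += 1
    return model

def predict_next(model, observed):
    """Return the most common next action after the observed prefix,
    or None if the prefix never appeared in the training replays."""
    counts = model.get(tuple(observed))
    return counts.most_common(1)[0][0] if counts else None
```

Real systems replace the raw frequency table with a learned classifier and richer features, but the pipeline shape (replays in, next-production prediction out) is the same.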

2009

Brood War API Released

In 2009, I discovered a Google Code project called BWAPI (Brood War API), which provided programmatic access to StarCraft. The library worked by using a third-party DLL-injection tool to load the API into the StarCraft runtime, exposing a set of hooks for calling in-game functions. Since then, the project has grown its contributor base and been ported to several languages. It is now hosted on GitHub and has a Java version.

2010

StarCraft II Released

In 2010, StarCraft II: Wings of Liberty was released, and the competitive Brood War scene continued to be active for a few years afterward. The expansion pack Heart of the Swarm was released in 2013, and Legacy of the Void was released in 2015.

First StarCraft Competition

The first StarCraft AI competition was held at AIIDE 2010. The main event was won by the Berkeley Overmind team. The competition also featured a man-vs-machine exhibition match, in which the human player easily defeated the AI opponent.

2011

Second AIIDE StarCraft Competition

Dave Churchill of the University of Alberta took ownership of the second and subsequent iterations of the AIIDE StarCraft competition. He wrote a tournament framework to automate running the tournament and changed some of the rules to foster collaboration, such as requiring submissions to be open source.

Student StarCraft AI Tournament

A second StarCraft AI tournament was started, with a focus on student submissions. The tournament was not associated with an annual conference and ran several events per year.

2013

StarCraft BroodWar Bots Ladder

Krasi0 developed a ladder system for StarCraft bots that is now running 24/7. This provides an environment for researchers to evaluate different AI approaches.

2014

StarCraft II Automated Player

Matt Webcorner demonstrated that a StarCraft II bot could be built by intercepting DirectX commands in order to infer the game state. One of the main limitations of this approach is that the bot only has access to the game state currently being displayed on the screen.

2016

AlphaGo defeats Lee Sedol

In March, DeepMind’s AlphaGo system defeated Go world champion Lee Sedol. After the victory, many people expected StarCraft to be the next challenge for DeepMind to attempt.

Facebook Joins In

Earlier this year, AI researchers from Facebook began using StarCraft as a reinforcement learning testbed. They presented an article on micromanagement in StarCraft.

BlizzCon Announcement

At BlizzCon on Friday, DeepMind announced that they were collaborating with Blizzard on an open framework for AI. The competition now heats up with Google and Facebook competing to build a bot capable of professional StarCraft gameplay.