ABSTRACT

The Atari 2600 video game console provides an environment for investigating the construction of artificial agent behaviours for a variety of game titles through a common interface. The task has received attention for addressing issues such as: 1) operating directly from the high-dimensional game screen; and 2) the partial observability of state. However, a general theme has been to assume a common machine learning algorithm while completely retraining the model for each game title. Success in this respect implies that agent behaviours can be identified without hand-crafting game-specific attributes/actions. This work advances the current state of the art by evolving solutions to multiple titles within a single run. We demonstrate that agent behaviours specialized to individual games, as well as single agents capable of playing all the games, emerge from the same evolutionary run. Moreover, the computational cost is no greater than that of building a solution for a single title. Finally, while generally matching the skill level of controllers from neuro-evolution/deep learning, the genetic programming solutions evolved here are several orders of magnitude simpler, resulting in real-time operation at a fraction of the cost.