Playing for Data: Ground Truth from Computer Games

Stephan Richter*1   Vibhav Vineet*2   Stefan Roth1   Vladlen Koltun2
1TU Darmstadt   2Intel Labs
* Authors contributed equally

Abstract

Recent progress in computer vision has been driven by high-capacity models trained on large datasets. Unfortunately, creating large datasets with pixel-level labels has been extremely costly due to the amount of human effort required. In this paper, we present an approach to rapidly creating pixel-accurate semantic label maps for images extracted from modern computer games. Although the source code and the internal operation of commercial games are inaccessible, we show that associations between image patches can be reconstructed from the communication between the game and the graphics hardware. This enables rapid propagation of semantic labels within and across images synthesized by the game, with no access to the source code or the content. We validate the presented approach by producing dense pixel-level semantic annotations for 25 thousand images synthesized by a photorealistic open-world computer game. Experiments on semantic segmentation datasets show that using the acquired data to supplement real-world images significantly increases accuracy and that the acquired data enables reducing the amount of hand-labeled real-world data: models trained with game data and just 1/3 of the CamVid training set outperform models trained on the complete CamVid training set.
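The core idea of the label-propagation step can be sketched as follows. This is a simplified, hypothetical illustration, not the released implementation: it assumes that each image patch has already been associated with a key identifying the rendering resources (e.g., a mesh/texture/shader combination) recorded from intercepted draw calls, so that annotating one patch labels every patch sharing that key.

```python
# Hypothetical sketch of resource-based label propagation.
# Assumption: `patches` maps each patch to a resource key recovered from
# the game's communication with the graphics hardware; patches rendered
# with the same resources receive the same semantic label.
from collections import defaultdict

def propagate_labels(patches, annotations):
    """patches: list of (patch_id, resource_key) pairs.
    annotations: dict patch_id -> label for the few hand-labeled patches.
    Returns a dict patch_id -> label with each annotation spread to all
    patches that share the annotated patch's resource key."""
    by_resource = defaultdict(list)
    for patch_id, key in patches:
        by_resource[key].append(patch_id)

    key_of = dict(patches)
    labels = {}
    for patch_id, label in annotations.items():
        for other in by_resource[key_of[patch_id]]:
            labels[other] = label
    return labels

# Labeling one 'car' patch labels every patch drawn with the same resources.
patches = [(0, "mesh_car_a"), (1, "mesh_car_a"), (2, "mesh_road")]
print(propagate_labels(patches, {0: "car", 2: "road"}))
```

Because a single annotation covers every occurrence of the same resource combination within and across frames, the hand-labeling effort is amortized over many images.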


Code The code for extracting data from games is available here: https://bitbucket.org/visinf/projects-2016-playing-for-data

Release Log

11/29/2016 - Initial code release for extracting data from games.

10/05/2016 - Added split into training/validation/test.

08/04/2016 - Initial release of dataset.