“They can adapt to teammates with arbitrary skills,” said Wojciech Czarnecki, a researcher at DeepMind, the artificial intelligence lab owned by Google’s parent company, Alphabet.

Through thousands of hours of game play, the agents learned very particular skills, like racing toward the opponent’s home base when a teammate was on the verge of capturing a flag. As human players know, the moment the opposing flag is brought to one’s home base, a new flag appears at the opposing base, ripe for the taking.


DeepMind’s project is part of a broad effort to build artificial intelligence that can play enormously complex, three-dimensional video games, including Quake III, Dota 2 and StarCraft II. Many researchers believe that success in the virtual arena will eventually lead to automated systems with improved abilities in the real world.

For instance, such skills could benefit warehouse robots as they work in groups to move goods from place to place, or help self-driving cars navigate en masse through heavy traffic. “Games have always been a benchmark for A.I.,” said Greg Brockman, who oversees similar research at OpenAI, a lab based in San Francisco. “If you can’t solve games, you can’t expect to solve anything else.”

Until recently, building a system that could match human players in a game like Quake III did not seem possible. But over the past several years, DeepMind, OpenAI and other labs have made significant advances, thanks to a mathematical technique called “reinforcement learning,” which allows machines to learn tasks by extreme trial and error.
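That trial-and-error loop can be sketched in a few lines. The toy example below (a three-armed bandit, not Quake III, and not DeepMind’s actual method) uses an epsilon-greedy rule: most of the time the agent picks the action it currently believes is best, but occasionally it tries a random one, refining its value estimates from the rewards it observes:

```python
import random

def train_bandit(true_means, episodes=5000, epsilon=0.1, seed=0):
    """Learn action values by trial and error on a noisy multi-armed bandit."""
    rng = random.Random(seed)
    n = len(true_means)
    estimates = [0.0] * n   # current guess of each arm's average reward
    counts = [0] * n        # how many times each arm has been tried
    for _ in range(episodes):
        if rng.random() < epsilon:
            arm = rng.randrange(n)  # explore: try a random action
        else:
            # exploit: pick the action with the highest estimated value
            arm = max(range(n), key=lambda a: estimates[a])
        reward = rng.gauss(true_means[arm], 1.0)  # noisy feedback from the environment
        counts[arm] += 1
        # incremental running mean of observed rewards for this arm
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
    return estimates

values = train_bandit([0.2, 0.5, 0.9])
best_arm = max(range(3), key=lambda a: values[a])
```

After enough episodes, the agent’s estimates converge toward the arms’ true average rewards, and it reliably favors the best action — the same explore-then-exploit dynamic that, at vastly greater scale, drives game-playing agents.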