The AI agents started out making random movements, but after playing the game over and over, they eventually figured out the strategies and techniques that work. DeepMind researcher Wojciech Czarnecki told The New York Times that they can even "adapt to teammates with arbitrary skills." The result? According to the paper, the agents successfully competed against human players even when their reaction times were slowed to match those of their flesh-and-blood opponents. They "clearly exceeded the win-rate of humans in maps that neither agent nor human had seen previously," the paper reads. That's a huge win for DeepMind, since training an AI for tasks that require working in groups is much harder than training it for solo tasks -- and it's a problem the company has to solve before it can develop automated systems for the real world.

As The New York Times noted, the skills DeepMind's AI picks up from complex games like StarCraft II could be put to use by warehouse robots that need to work together. They could also lead to systems that allow fleets of self-driving cars to navigate heavy traffic. DeepMind still has a lot of work ahead of it, though. Georgia Tech College of Computing professor Mark Riedl told the Times that the problem with the company's AI agents is that they merely react to whatever's happening in the game -- they don't actually communicate with each other the way groups of humans and other animals do. That's why we wouldn't be surprised if the company takes on more complex multiplayer video games in the future in an effort to beef up its AI's capabilities.