Artificial intelligence made short work of Go, a 3,000-year-old Chinese board game with more possible moves than atoms in the observable universe, so how will it fare taking on a video game classic like Doom? AI researchers are going to find out, having announced a new challenge looking for computers that know how to handle a rocket launcher, with the best bots set to duke it out in a deathmatch later this year at the Computational Intelligence and Games (CIG) Conference.

At first glance, this might sound like a walk in the park. After all, if you've ever played a first-person shooter against computer enemies, you'll know they can be as fast and accurate as, well, a computer. But the bots you've played against will have had access to the game's inner workings — they're looking at the world like Neo in The Matrix, with perfect knowledge of maps, weapons, and the positions of other players. For the "Visual AI Doom Competition," artificially intelligent bots will only have the same information as a human: they'll see the screen in front of them, and nothing more.

"There are all sorts of video games that humans play way better than computers."

This means that the bots will have to learn about their virtual world in a manner more familiar to humans. Demis Hassabis, co-founder of Google's DeepMind, the AI division whose software beat Go, told The Verge earlier this year that video games can actually be a greater challenge for AI for this reason. "There are obviously all sorts of video games that humans play way better than computers, like StarCraft," said Hassabis. "Strategy games require a high level of strategic capability in an imperfect information world — 'partially observed,' it's called. The thing about Go is obviously you can see everything on the board, so that makes it slightly easier for computers."

Doom might not be a strategy game like StarCraft, but it is "partially observed." AI researchers have looked into Minecraft as a testing ground for computers for similar reasons. In March, Microsoft launched an open-source platform to make it easier for bots to roam the game's virtual worlds — an area of research that will help develop AI that can control robots in the physical world. "We think this is an essential part of building this kind of general intelligence," one of the scientists involved told BBC News.

The competition will focus on "deep reinforcement learning"

The "Visual AI Doom Competition" is also encouraging competitors to create entries using an AI method known as "deep reinforcement learning." This combines deep learning (which mines large amounts of data to look for recurring patterns) with reinforcement learning (which trains computers in the same way you'd train a pet — giving them rewards when they do the right thing). It was deep reinforcement learning that allowed DeepMind's AlphaGo to beat the world champion, and AI researchers tend to think it will be behind the field's next big breakthroughs.
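To make the "training a pet" idea concrete, here is a minimal sketch of the reinforcement-learning half of the technique: tabular Q-learning on a toy five-cell corridor. The environment, names, and parameters below are illustrative inventions, not anything from the competition or DeepMind's code; in deep reinforcement learning, the lookup table `q` would be replaced by a neural network so the agent can learn from raw screen pixels instead of a handful of labeled states.

```python
import random

N_CELLS = 5           # states 0..4; reaching cell 4 ends an episode with a reward
ACTIONS = (+1, -1)    # move right or left
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1  # learning rate, discount, exploration rate

# Q-table: the agent's running estimate of how good each action is in each state.
q = {(s, a): 0.0 for s in range(N_CELLS) for a in ACTIONS}

def step(state, action):
    """Environment dynamics the agent never sees directly: it only gets
    back an observation (the new cell) and a reward (the 'treat')."""
    nxt = min(max(state + action, 0), N_CELLS - 1)
    reward = 1.0 if nxt == N_CELLS - 1 else 0.0
    return nxt, reward, nxt == N_CELLS - 1

random.seed(0)
for _ in range(200):                      # training episodes
    state, done = 0, False
    while not done:
        # Epsilon-greedy: mostly take the best-known action, sometimes explore.
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        nxt, reward, done = step(state, action)
        # Q-learning update: nudge the estimate toward the reward received
        # plus the discounted value of the best follow-up action.
        best_next = max(q[(nxt, a)] for a in ACTIONS)
        q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])
        state = nxt

# After training, the greedy policy in every cell is "move right" (+1).
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_CELLS)}
```

The agent is never told the corridor's rules; it discovers that moving right pays off purely from the rewards it collects, which is exactly the feedback loop that, scaled up with a deep network, let bots learn games from the screen alone.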

The competition is now accepting entries for warm-up rounds with a deadline of May 31st. Final submissions will have to be in by August, and in September the best bots will battle it out in a tournament at CIG in Greece. One track of the competition will have bots playing in a known map with rocket launchers only, while the other will drop them in an unknown arena with the full range of weapons and items. Just like in Doom deathmatches for humans, the player with the most frags will win.
