Not content with mastering the centuries-old Chinese board game Go, DeepMind has now pitched an artificial intelligence (AI) agent against a more contemporary game called "Labyrinth."

The Google-owned company, which is based in King's Cross in London, published a video on YouTube showing an AI navigating a computer game with a 3D maze that looks like a level from the '90s shooter "Doom."

In the game, the AI is rewarded for finding apples and for portals that teleport it elsewhere in the maze. The AI has to score as many points as possible in a minute.

"This task is much more challenging than [a driving game] because the agent is faced with a new maze in each episode and must learn a general strategy for exploring mazes," DeepMind said in a paper published last week by eight of the company's most prominent academics.

The authors explain that the AI successfully learned a "reasonable strategy for exploring random 3D mazes using only a visual input."

Unlike other AIs that play games independently, this particular AI had no access to the game's internal code, according to New Scientist. That means the AI had to learn the game in the same way that a human would — by looking at the screen and deciding how to move forward from there.

Changing AI tactics

Last year the DeepMind team created an AI capable of learning and playing 49 different games from the Atari 2600 — a gaming console from the 1980s. The AI, which wasn't told the rules of the games and instead had to watch the screen to develop its own strategies, beat the best human scores on 23 of them.

Mastering the Atari games involved using a technique called reinforcement learning, which rewards the AI for taking steps that boost its score, in conjunction with a deep neural network that analyses and learns patterns on the game screen. The AI also used a technique called experience replay, meaning it could look back into its memory and study the outcome of past scenarios.
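The idea behind experience replay can be sketched in a few lines. This is a minimal illustration, not DeepMind's actual implementation: the `ReplayBuffer` class and the toy transitions are assumptions for demonstration. The agent stores past (state, action, reward, next state) transitions in a fixed-size memory and later trains on random minibatches drawn from it.

```python
import random
from collections import deque

class ReplayBuffer:
    """Fixed-size memory of past (state, action, reward, next_state) transitions."""

    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)  # oldest transitions drop off automatically

    def add(self, state, action, reward, next_state):
        self.buffer.append((state, action, reward, next_state))

    def sample(self, batch_size):
        # Sampling at random breaks the correlation between consecutive
        # frames, which helps stabilise training of the neural network.
        return random.sample(self.buffer, batch_size)

# Toy usage: store a few transitions, then draw a training minibatch.
memory = ReplayBuffer(capacity=1000)
for step in range(10):
    memory.add(state=step, action=step % 4, reward=1.0, next_state=step + 1)

minibatch = memory.sample(4)  # 4 transitions drawn uniformly at random
```

In the full system, each sampled transition would be fed to the deep neural network to update its estimate of which actions lead to higher scores.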

However, experience replay is hard to scale up to more advanced problems, according to DeepMind's latest paper.

To overcome this issue, DeepMind used a technique called asynchronous reinforcement learning, which involves multiple versions of an AI working together to tackle a problem and compare their experiences.

This approach requires less computing power, according to New Scientist. The AI that beat the Atari games required eight days of training on high-spec machines; the new AI achieved better performance on lower-spec systems in four days.