If we want to create artificial intelligence that can teach itself how the world works, it needs to be curious. This has been a recurring theme in the world of AI in recent years, and newly published research from Google's DeepMind division shows exactly why this quintessentially human quality is important for making computers smart.

Curiosity means rewarding the AI agent's exploration

In the video above you can see DeepMind's AI agent tackling the infamously difficult Atari game Montezuma's Revenge. Unlike bots playing Unreal Tournament or StarCraft, the agent doesn't have access to all the information in the game, but is learning to play the same way humans do — by looking at the screen, pushing buttons, and seeing what works. If this setup sounds familiar, it's because last February DeepMind unveiled an earlier iteration of the same agent. When that bot tried to take on Montezuma's Revenge, though, it couldn't score a single point. But as the video above shows, now DeepMind's AI is dodging skulls, grabbing keys, and scoring points like a pro. The difference? Curiosity.

Curiosity, in this case, is a mechanic known as "intrinsic motivation." Essentially, this means creating a reward system for the AI agent, something comparable to the brain's "pleasure chemicals." DeepMind's scientists then connected these digital rewards to the agent's exploration system, giving it the desire to look around its surroundings — or curiosity.
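To make the idea concrete, here is a minimal sketch of one simple form of intrinsic motivation: a count-based exploration bonus, where the agent is rewarded more for states it has visited less often. This is an illustrative example of the general technique, not DeepMind's actual reward mechanism; the function name and the novelty formula are assumptions for this sketch.

```python
from collections import defaultdict
import math

def exploration_bonus(visit_counts, state):
    """Illustrative intrinsic reward: large for novel states,
    decaying toward zero as a state becomes familiar."""
    visit_counts[state] += 1
    return 1.0 / math.sqrt(visit_counts[state])

counts = defaultdict(int)
# First visit to a room yields the maximum bonus...
first = exploration_bonus(counts, "room_1")
# ...revisiting it yields less...
second = exploration_bonus(counts, "room_1")
# ...while a brand-new room pays the full bonus again.
novel = exploration_bonus(counts, "room_2")
```

Added to the game's normal score, a bonus like this nudges the agent toward rooms it hasn't seen yet, even when the game itself hands out no points for getting there.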

As the research paper explains, the new agent is much more effective than previous iterations, exploring far more of the trap-filled rooms of Montezuma's Revenge. "After 50 million frames, the agent using exploration bonuses has seen a total of 15 rooms, while the no-bonus agent has seen two," write the researchers. Of course, while this research is exciting, it's not the same as creating an artificial intelligence that can navigate the real world. An AI-controlled robot navigating your house, for example, is less likely to encounter floating skulls than in a video game, but it's just as likely to fall off a ledge — and in real life, you don't get unlimited lives.
