Usually, when we get AI systems to watch video games, we expect them to play the games afterward. That’s how computers have beaten everything from the board game Go to various Atari titles. But a group of researchers from the Georgia Institute of Technology is trying something different: they’re getting AI to learn how video games work instead.

In a recent paper titled “Game Engine Learning from Video,” the team describes an AI system that can re-create the game engine of a title like Super Mario Bros. just by watching it being played. The system doesn’t have access to the code; it just looks at the pixels and learns. The re-creations it makes are glitchy, but passable.

It’s a first in the world of AI video gaming, but there are important caveats and limitations for the research. For a start, the AI system isn’t learning everything about the game from scratch. It’s supplied with two important sets of information: first, a visual dictionary featuring all the sprites in the game; and second, a set of basic concepts, like the position of objects and their velocity, which it uses to analyze what it sees. With these tools in hand, the AI breaks down the gameplay frame-by-frame, labels what it sees, and looks for rules that explain the action.
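To make the two inputs concrete, here’s a hypothetical sketch of what they might look like; the names and structure are illustrative, not taken from the paper.

```python
# 1) A visual dictionary: every sprite the system might see in a frame,
#    mapped to some reference image it can match against pixels.
sprite_dictionary = {
    "mario_small_walk": "mario_walk.png",
    "goomba": "goomba.png",
    "question_block": "question_block.png",
}

# 2) A set of basic concepts the parser can attach to what it sees.
concepts = ["position", "velocity", "animation_state"]
```

With these in hand, the system can label each frame in terms it already understands, rather than having to invent notions like “object” or “speed” from raw pixels.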

“For each frame of the video we have a parser which goes through and collects the facts. What animation state Mario is in, for example, or what velocities things are moving at,” Matthew Guzdial, the lead author of the paper, tells The Verge. “So imagine the case where Mario is just above a Goomba in one frame, and then the next frame the Goomba is gone. From that it comes up with the rule that when Mario is just above the Goomba and his velocity is negative, the Goomba disappears.”
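The Goomba example can be sketched in code. This is a deliberately minimal illustration of the idea Guzdial describes, not the authors’ actual system: a parser turns each frame into facts, and comparing consecutive frames yields a candidate rule.

```python
def parse_frame(frame):
    """Collect facts about each sprite: position and vertical velocity."""
    return {
        sprite["name"]: {"x": sprite["x"], "y": sprite["y"], "vy": sprite["vy"]}
        for sprite in frame
    }

def induce_rule(before, after):
    """Compare consecutive frames and propose a rule explaining a change."""
    facts_before = parse_frame(before)
    facts_after = parse_frame(after)
    for name in facts_before:
        if name not in facts_after:  # a sprite disappeared between frames
            # Condition: what was true of the other sprites just before?
            conditions = {
                other: dict(facts)
                for other, facts in facts_before.items()
                if other != name
            }
            return {"if": conditions, "then": f"{name} disappears"}
    return None

# Mario just above a Goomba, with negative (downward) velocity...
frame1 = [
    {"name": "Mario", "x": 40, "y": 52, "vy": -2},
    {"name": "Goomba", "x": 40, "y": 48, "vy": 0},
]
# ...and in the next frame the Goomba is gone.
frame2 = [{"name": "Mario", "x": 40, "y": 50, "vy": 1}]

rule = induce_rule(frame1, frame2)
print(rule["then"])  # prints "Goomba disappears"
```

The real system has to weigh many such candidate rules against each other across thousands of frames; this sketch only shows the single-comparison case.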

Over time, the system builds up a collection of these small rules, recording them as a series of logic statements (e.g., if this, then that) and combining them to approximate the game engine. The rules can be exported and converted into any of a number of programming languages to re-create the game itself.
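Combining such if-this-then-that statements into an approximate engine might look something like the following sketch. Again, the representation is hypothetical; it only illustrates how firing a set of learned rules each tick can stand in for the original game logic.

```python
# Each rule pairs a condition on the game state with a consequence.
rules = [
    # "If Mario is directly above a Goomba and falling, the Goomba disappears."
    {
        "if": lambda s: (s["mario"]["vy"] < 0
                         and s["mario"]["x"] == s["goomba"]["x"]
                         and s["goomba"]["alive"]),
        "then": lambda s: s["goomba"].update(alive=False),
    },
]

def step(state):
    """One tick of the approximated engine: fire every rule whose condition holds."""
    for rule in rules:
        if rule["if"](state):
            rule["then"](state)
    return state

state = {"mario": {"x": 40, "vy": -2}, "goomba": {"x": 40, "alive": True}}
step(state)
print(state["goomba"]["alive"])  # prints False: the rule fired
```

Because the rules are just declarative condition/consequence pairs, translating them into another programming language is a mechanical conversion rather than a reverse-engineering job.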

Right now, the system is limited to working on 2D platformers. That’s because it relies on humans to define what can happen in any particular game (a set of possibilities the researchers call “action states”). Defining all this information for a 3D game would take a lot more time, as well as more advanced machine vision tools.

In the future, though, the team from Georgia Tech thinks technology like this could be used to work out not only how video games work — but real life, too. This would take a number of breakthroughs in the capacity of AI to comprehend the world as humans understand it. (And, of course, that’s infinitely more complex than Super Mario Bros.) But it’s not an impossible idea. “I do think a future version of this could [analyze] limited domains of reality,” says Guzdial. Right now, though, they’re concentrating on Mega Man.