(Want to play the game or see the code? Click here)



When I read Ender’s Game, one of the parts that most stuck with me was the delightfully creepy Mind Game–a game designed to probe the player’s subconscious. In the book, we learn that the Mind Game isn’t actually programmed–it’s powered by an AI that makes up the gameplay as it goes, reacting to the player’s decisions and getting progressively more surreal.





Ender’s Game was written in 1985, and since then AI has actually gotten pretty good at creative tasks. The real-time graphics of something like the Mind Game are probably still out of reach, but what about something simpler, like a text adventure?



I decided to try writing a game like this by training GPT-2, a state-of-the-art predictive text model, on some transcripts of classic text adventures. These look something like this:



YOU STAND IN FRONT OF THE DOOR

>open door

THE DOOR OPENS TO REVEAL A CLUTTERED OFFICE.

GPT-2 is a predictive text model, which just means that it tries to predict what comes next after some text that you enter. That means if you give it

>Open door

It will try to predict what happens next, based on its training data. Let the user choose their next action based on the response, and you have the makings of a text adventure game.
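The loop described above can be sketched in a few lines. This is a minimal illustration, not the game's actual code: the `generate()` function here is a hypothetical stand-in for the real GPT-2 sampling call.

```python
# Sketch of the core game loop. generate() is a placeholder for the
# actual GPT-2 call, which would sample a continuation of the prompt.
def generate(prompt):
    # A real version would return GPT-2's predicted continuation.
    return "THE DOOR OPENS TO REVEAL A CLUTTERED OFFICE."

def step(history, action):
    """Append the player's action to the transcript and let the
    model continue it, text-adventure style."""
    prompt = history + "\n>" + action + "\n"
    response = generate(prompt)
    return prompt + response

transcript = step("YOU STAND IN FRONT OF THE DOOR", "open door")
print(transcript)
```

The model only ever sees a transcript-shaped prompt, so it keeps producing transcript-shaped output.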



This makes for a playable, if thoroughly weird, text adventure:



YOU ARE STANDING IN A GRASSY FIELD WITH A PATH ALONG THE STREAM, AND A SMALL CRACK ACROSS THE FLOOR AT THE BOTTOM OF A SMALL PIT

>follow path

THE PATH HEADS NORTH-SOUTH HERE, AND APPEARS TO BE A STRAIGHT PATH

>continue

ROAD NEAR BRIDGE

>cross the bridge

CROSS THE BRIDGE AND YOU WILL FALL INTO A PIT

>fine,go into the pit

FINE,GO INTO THE PIT

>Look around

LOOK AROUND, AND STRANGE SHAPES (APPARENTLY MADE BY AN ARTIST WHO NEVER WAS IN THE ROOM) ARE SEEN ALL OVER

>talk to the shapes

ARGUE WITH THE SHAPEK FAN, WHICH APPEARS TO BE HANDS-FREE. THE CHEER GOES UP! THE HEAVENS ARE NOW WIDE ENOUGH TO ALLOW FOR SINGING

>I’m scared

I’M SCARED THAT YOU HAVE DONE SOMETHING DELIBERATELY

>Get out of the pit

YOU ARE IN THE PIT, SO TO SPEAK, BELOW THE RAMPARTS

THERE IS A NARROW PASSAGE FROM EAST TO WEST AND A STONE STAIRWAY LEADING DOWN

THE ROOM IS DEAFENINGLY LOUD WITH AN UNDETERMINED RUSHING SOUND

THE SOUND SEEMS TO REVERBERATE FROM ALL OF THE WALLS, MAKING IT DIFFICULT EVEN TO THINK

>go down

YOU ARE IN THE PIT

>go down the stairway

GO DOWN THE STAIRWAY IS BARRED BY EVIL SPIRITS, WHO BY SOME MIRACLE OF ZORKIAN TECHNOLOGY HAVE STOPPED THE FLOW OF THE RIVER

>kill the spirits

THE SPIRITS, SENSING A GREATER POWER, FLEE THROUGH THE WALLS

>go down the stairway

THE PIT IS TOO NARROW FOR MOST INSECTS

(Even after defeating the spirits, I was still unable to get out of the pit. I was warned, I guess.)



If you want to play the game you can do it online here!



There are some interesting things going on here. First, GPT-2 is lifting elements from some of the games in its training set (e.g. Zork) and sort of mashing them up and putting them in new places. It does this surprisingly well–for instance, the spirits reacted to me in a reasonable way.



What’s the goal of this game? Exploring, I guess. Even the game is making it up as it goes along, and that’s what makes it interesting.



Because none of the logic is explicitly programmed, you can do some other cool things. For example, if you make your character start in space instead of a meadow, GPT-2 will roll with it, though the training on classic adventure games means that it will occasionally try to fill your spaceship with moss.



Technical stuff

There were a few interesting trends I noticed while programming the game:



* GPT-2 will happily try to recreate entire transcripts, including the player’s actions. The solution to this is postprocessing that simply truncates the GPT-2 output when it tries to generate a player action.



* Making the game flow is a bigger problem. You need to feed in context (i.e. GPT-2’s previous description of the room the player is in) for the output to make any sense, but if you feed in everything that happened in the past, GPT-2 might decide an enemy you defeated 10 turns ago is still there.



The solution in this game is more post-processing–for instance, a variable that keeps track of the current room description and feeds it to GPT-2 along with each action. This seems to work fairly well.
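The room-tracking trick could be sketched like this. Again `generate()` is a hypothetical stand-in for the GPT-2 call; the point is that only the latest room description, not the full history, goes into each prompt:

```python
# Stand-in for the real GPT-2 sampling call.
def generate(prompt):
    return "YOU ARE IN A CLUTTERED OFFICE."  # placeholder response

class Game:
    def __init__(self, start_description):
        # Only the current room description is kept as context.
        self.room = start_description

    def act(self, action):
        prompt = self.room + "\n>" + action + "\n"
        response = generate(prompt)
        # Replace the context entirely, so stale events (like that
        # enemy from 10 turns ago) can't leak back into the prompt.
        self.room = response
        return response
```

The tradeoff is that anything not mentioned in the latest description is forgotten, which is why long-range plot threads don't survive.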



* People often say the optimum temperature value for GPT-2 is around 0.5–0.7. For this game it is much lower–raising it to the normal range tends to break any semblance of cause and effect in the game. This is likely because here we care less about whether individual sentences are novel than about whether the overall scenario is novel, and whether it makes sense.
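For readers unfamiliar with the knob being tuned here: temperature divides the model's logits before the softmax, so low values sharpen the distribution toward the most likely next token (more predictable, more coherent), while high values flatten it (more surprising, less cause-and-effect). A small self-contained illustration:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Softmax over logits scaled by 1/temperature."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]  # toy next-token scores
print(softmax_with_temperature(logits, 0.7))  # flatter: more surprising text
print(softmax_with_temperature(logits, 0.2))  # peaked: more predictable text
```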



Finally, if you want to play around with re-training the model, you can download the training data here!





