Computers may be naturally fluent in binary, but how might they fare with learning human languages? Computers were put to the test playing the game Civilization. Trial and error alone let them win about half the time...but then they started reading the instruction manual.


MIT researchers gave their AI this particular challenge because winning a game like Civilization is so complex. At first, the computer had only the most limited knowledge, as a press release explains:

It begins with virtually no prior knowledge about the task it's intended to perform or the language in which the instructions are written. It has a list of actions it can take, like right-clicks or left-clicks, or moving the cursor; it has access to the information displayed on-screen; and it has some way of gauging its success, like whether the software has been installed or whether it wins the game. But it doesn't know what actions correspond to what words in the instruction set, and it doesn't know what the objects in the game world represent.


From that starting point, the computer eventually managed to win the game about 46% of the time. It reached that success rate by playing the game a bunch of times and being able to gauge how successful its different actions were, maximizing the actions that led to victory and minimizing those that led to defeat. In and of itself, that's moderately impressive but nothing too amazing. But then the researchers gave the computer access to the instruction manual.
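That trial-and-error loop is a classic reward-maximization setup. As a rough illustration (not the researchers' actual method), here is a toy sketch in Python: the two action names and their win probabilities are invented for the example, standing in for full games whose only feedback is win or lose.

```python
import random

def play_episode(action, rng):
    # Hypothetical stand-in for a full game: action "a" wins 70% of
    # the time, action "b" only 30%. In the real system the only
    # feedback was whether the computer won or lost the game.
    win_prob = {"a": 0.7, "b": 0.3}[action]
    return 1 if rng.random() < win_prob else 0

def learn(n_episodes=2000, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    wins = {"a": 0, "b": 0}
    plays = {"a": 0, "b": 0}
    for _ in range(n_episodes):
        # Mostly exploit the action with the best observed win rate,
        # but explore occasionally so the estimates keep improving.
        if rng.random() < epsilon or not all(plays.values()):
            action = rng.choice(["a", "b"])
        else:
            action = max(plays, key=lambda k: wins[k] / plays[k])
        reward = play_episode(action, rng)
        plays[action] += 1
        wins[action] += reward
    return wins, plays

wins, plays = learn()
```

After enough episodes, the learner plays the more successful action far more often than the weaker one, which is the "maximizing the actions that led to victory" behavior described above, just in miniature.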

Researcher S.R.K. Branavan explains:

"Games are used as a test bed for artificial-intelligence techniques simply because of their complexity. Every action that you take in the game doesn't have a predetermined outcome, because the game or the opponent can randomly react to what you do. So you need a technique that can handle very complex scenarios that react in potentially random ways. [Instruction manuals are a] very open text. They don't tell you how to win. They just give you very general advice and suggestions, and you have to figure out a lot of other things on your own."

So if the AI was going to get anything useful out of the instruction manual, it really would have to understand what the text was saying and figure out how to apply it. And the results? The computer's success rate skyrocketed from 46% to 79%, an absolutely massive leap. According to the researchers, that suggests computers really can learn the meaning of words through interaction with the environment around them.

Brown University computer scientist Eugene Charniak attests to just how impressive this finding is:

"If you'd asked me beforehand if I thought we could do this yet, I'd have said no. You are building something where you have very little information about the domain, but you get clues from the domain itself."


Via MIT. Image from Civilization.