Give a machine a textbook…

It's a hard problem, but it's one Allen is eager to solve. After years of pondering these ideas abstractly, he's throwing his fortune into a new venture aimed squarely at the problems of machine intelligence, dubbed the Allen Institute for Artificial Intelligence, or AI2 for short. It's ambitious, like Allen's earlier projects on spaceflight and brain-mapping, but the initial goal is deceptively simple. Led by University of Washington professor Oren Etzioni, AI2 wants to build a computer that can pass a high school biology course. The team feeds in a textbook and gives the computer a test. So far, it's failing those tests… but it's getting a little better each time.

Challenges for AI

Causality: Humans use new information to constantly update their mental pictures of the present or the past. That's a much more sophisticated kind of information management than Siri or Wolfram Alpha attempt, but experts say it's within reach.

Uncertain or Vague Knowledge: Traditional Boolean logic categorizes claims as "true" or "false," but human knowledge often deals in incomplete truths or generalizations like "large cars often get poor gas mileage." Future AI systems will have to deal with shades of certainty, and supercomputers like Watson have already switched to similar frameworks.
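The contrast between Boolean logic and shades of certainty can be sketched in a few lines. This is a hypothetical toy, not how Watson or any real system works: it simply shows how a claim can carry a degree of belief rather than a strict true/false value.

```python
# Toy sketch of "shades of certainty" (all names here are invented).

# Boolean logic: every claim is strictly True or False.
boolean_facts = {"large cars get poor gas mileage": True}

# Confidence-weighted knowledge: a claim carries a degree of belief,
# so a generalization with "often" can be represented directly.
weighted_facts = {"large cars get poor gas mileage": 0.8}

def holds(claim, knowledge, threshold=0.5):
    """Treat a claim as usable when its confidence clears a threshold.

    Unknown claims default to 0.0 confidence rather than raising an error,
    which is one simple way to handle incomplete knowledge.
    """
    return knowledge.get(claim, 0.0) >= threshold

print(holds("large cars get poor gas mileage", weighted_facts))
print(holds("small cars get poor gas mileage", weighted_facts))
```

Raising or lowering the threshold changes how cautious the system is, which is exactly the kind of knob a strict true/false framework cannot offer.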

The key problem is knowledge representation: how to represent all the knowledge in the textbook in a way that allows the program to reason and apply that knowledge in other areas. Programs are good at running procedures (say, converting pounds to kilograms), and modern programs have gotten better at knowing when to run them (say, a Google search on "32 pounds to kilograms"), but they're still managing the information as fodder for algorithms rather than facts and rules that can be generalized across different situations.

Having the computer study biology is a way of laying the groundwork for new kinds of learning and reasoning. "How do you build a representation of knowledge that does this?" Etzioni asks. "How do you understand more and more sophisticated language that describes more and more sophisticated things? Can we generalize from biology to chemistry to mathematics?"

That also means getting a grip on the complexity of language itself. Most language doesn't offer discrete pieces of information for computers to piece through; it's full of ambiguity and implied logic. Instead of simple text commands, Etzioni envisions a world where you can ask Siri something like, "Can I carry that TV home, or should I call a cab?" That means a weight calculation, sure, but it also means calculating distance and using spatial reasoning to approximate bulkiness. Siri would have to proactively ask whether the television can fit in the trunk of a cab. Siri would have to know "that TV" refers to the television you were just looking at online, and that carrying it home means a walking trip from the affiliated store to your home. Even worse, Siri would have to know that "can I" refers to a question of advisability, not whether the trip is illegal or physically impossible.

Making it all work could have huge implications. "What we're really talking about is, what is the user interface metaphor of the 21st century?" Etzioni says. "Speech interaction is very natural. We just need to build the back-end capabilities to power it." If we're going to move into a world of voice commands, we'll need to confront thorny problems of language processing and knowledge representation, problems that we'll be lucky to solve within the decade. But if we can work out an answer, it could power a new generation of smart, screenless tech.