The victory by IBM's Watson over all-time champions Ken Jennings and Brad Rutter that viewers of Jeopardy! saw this week is a milestone for those of us in the field of computer science known as Question Answering, or QA. It's not that we didn't see this coming; as a consultant for IBM's DeepQA team, I've seen Watson beat too many qualified opponents in evaluation rounds to be surprised by this outcome. But Watson is the first system with enough speed, accuracy and confidence to compete with humans at a real-time QA task like Jeopardy!.

Watson in training. Image courtesy of IBM.

The drama of this "man vs. machine" match gives our field a higher public profile and a jolt of credibility that will help us to promote a more effective way of interacting with computers using natural language.

Question Answering isn't a new line of research. The idea of asking a question of a computer in English and receiving a precise answer is something that people, and science fiction writers in particular, have thought about since the dawn of the computer age. One of the earliest QA systems, LUNAR, was built by Bill Woods and his team at Bolt, Beranek and Newman in the 1970s to help scientists retrieve data about Moon rocks. More recently, the explosion of the Internet and on-line information has led to great advances in document retrieval, and search engines like Google have become our default means of access to on-line text.

The big difference between a QA system like Watson and a search engine like Google is that Watson can read the text for you and provide a precise answer. Google will just give you a list of documents it thinks might contain the answer; you have to do the reading and answer-spotting yourself.
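To make the contrast concrete, here is a deliberately tiny sketch (not IBM's DeepQA, and nothing like Watson's actual architecture): a toy "search engine" that returns whole documents matching keywords, next to a toy "QA system" that reads those documents and extracts a short answer string. The documents, markers, and extraction heuristic are all invented for illustration.

```python
# Toy contrast between document retrieval and question answering.
# Everything here is a made-up illustration, not a real QA pipeline.

documents = [
    "LUNAR was built by Bill Woods and his team to answer questions about Moon rocks.",
    "Watson was developed by IBM's DeepQA team to play Jeopardy!.",
    "Search engines retrieve documents that may contain an answer.",
]

def search(query):
    """Retrieval: return every document that shares a keyword with the query."""
    terms = set(query.lower().split())
    return [d for d in documents if terms & set(d.lower().split())]

def answer(question):
    """QA: 'read' the retrieved documents and pull out a precise answer span.
    A naive heuristic grabs the phrase after 'built by' / 'developed by'."""
    for doc in search(question):
        for marker in ("built by", "developed by"):
            if marker in doc:
                span = doc.split(marker, 1)[1]
                return span.split(" to ")[0].strip()
    return None

print(search("Who built LUNAR?"))  # the search engine hands back documents
print(answer("Who built LUNAR?"))  # the QA system hands back "Bill Woods and his team"
```

The point of the sketch is the difference in what comes back: `search` leaves the reading and answer-spotting to you, while `answer` does that reading itself and returns just the fact you asked for. Watson does this at an entirely different scale and level of sophistication, but the division of labor is the same.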

It's easy to see why question answering is a more effective way for humans to retrieve specific pieces of information. But until Watson came along, QA systems were typically too slow, too limited to a specific area of knowledge, or too inaccurate to perform well on an unrestricted, real-time QA task like Jeopardy!.