
Two interesting articles on linguistic aspects of artificial intelligence have recently appeared in the popular press.

The first one is by Richard Powers ("What Is Artificial Intelligence?", NYT 2/6/2011):

IN the category “What Do You Know?”, for $1 million: This four-year-old upstart the size of a small R.V. has digested 200 million pages of data about everything in existence and it means to give a couple of the world’s quickest humans a run for their money at their own game.

The question: What is Watson?

I.B.M.’s groundbreaking question-answering system, running on roughly 2,500 parallel processor cores, each able to perform up to 33 billion operations a second, is playing a pair of “Jeopardy!” matches against the show’s top two living players, to be aired on Feb. 14, 15 and 16.
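The hardware figures quoted above imply an aggregate throughput that is easy to work out. This is just back-of-the-envelope arithmetic on the article's own numbers (2,500 cores, up to 33 billion operations per second each), not an official IBM specification:

```python
# Rough aggregate throughput implied by the quoted Watson figures.
cores = 2_500
ops_per_core_per_sec = 33e9  # "33 billion operations a second" per core

aggregate = cores * ops_per_core_per_sec
print(f"{aggregate:.2e} operations per second")  # 8.25e+13
```

That works out to roughly 80 trillion operations per second across the machine, consistent with the ~80-teraflops figure often cited for Watson at the time.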

Powers' novels include Galatea 2.2 (1995), in which a fictional Powers

… meets a computer scientist named Philip Lentz. Intrigued by Lentz's overbearing personality and unorthodox theories, Powers eventually agrees to participate in an experiment involving artificial intelligence. Lentz bets his fellow scientists that he can build a computer that can produce an analysis of a literary text that is indistinguishable from one produced by a human. It is Powers' task to "teach" the machine. After going through several unsuccessful versions, Powers and Lentz produce a computer model (dubbed "Helen") that is able to communicate like a human. It is not clear to the reader or to Powers whether she is simulating human thought, or whether she is actually experiencing it. Powers tutors the computer, first by reading it canonical works of literature, then current events, and eventually telling it the story of his own life, in the process developing a complicated relationship with the machine.

The other article on language, humans, and machines is in the most recent Atlantic, where Brian Christian describes and discusses his experiences as a participant in the 2009 Loebner Prize competition ("Mind vs. Machine"):

I wake up in a hotel room 5,000 miles from my home in Seattle. After breakfast, I step out into the salty air and walk the coastline of the country that invented my language, though I find I can’t understand a good portion of the signs I pass on my way—LET AGREED, one says, prominently, in large print, and it means nothing to me.

I pause, and stare dumbly at the sea for a moment, parsing and reparsing the sign. Normally these kinds of linguistic curiosities and cultural gaps intrigue me; today, though, they are mostly a cause for concern. In two hours, I will sit down at a computer and have a series of five-minute instant-message chats with several strangers. At the other end of these chats will be a psychologist, a linguist, a computer scientist, and the host of a popular British technology show. Together they form a judging panel, evaluating my ability to do one of the strangest things I’ve ever been asked to do.

I must convince them that I’m human.

Fortunately, I am human; unfortunately, it’s not clear how much that will help.

Christian's special take on the competition is that he doesn't really care about how the computers do — he wants to be able to prove that he's human; or rather, he's interested in what he has to do to demonstrate his humanity in a conversation, and he gives entertaining examples from previous competitions that focus on this question.

In the end, he does win the "Most Human Human" award, as the human entrant with the best score in a round-robin against the computer entrants.

In some ways a closer fight would have been more dramatic. Between us, we confederates hadn’t permitted a single vote to go the machines’ way. Whereas 2008 was a nail-biter, 2009 was a rout. We think of science as an unhaltable, indefatigable advance. But in the context of the Turing Test, humans—dynamic as ever—don’t allow for that kind of narrative. We don’t provide the kind of benchmark that sits still.

As for the prospects of AI, some people imagine the future of computing as a kind of heaven. Rallying behind an idea called “The Singularity,” people like Ray Kurzweil (in The Singularity Is Near) and his cohort of believers envision a moment when we make smarter-than-us machines, which make machines smarter than themselves, and so on, and the whole thing accelerates exponentially toward a massive ultra-intelligence that we can barely fathom. Such a time will become, in their view, a kind of a techno-Rapture, in which humans can upload their consciousness onto the Internet and get assumed—if not bodily, then at least mentally—into an eternal, imperishable afterlife in the world of electricity.

Others imagine the future of computing as a kind of hell. Machines black out the sun, level our cities, seal us in hyperbaric chambers, and siphon our body heat forever.

I’m no futurist, but I suppose if anything, I prefer to think of the long-term future of AI as a kind of purgatory: a place where the flawed but good-hearted go to be purified—and tested—and come out better on the other side.

The scores and transcripts for the 2009 Loebner competition are here.

Christian's article is adapted from a book to be released on March 1, The Most Human Human: What Talking with Computers Teaches Us About What It Means to Be Alive.

Some previous LL posts on relevant topics:

"Philologists say…", 9/28/2005

"Thriving on confusion in the Guardian", 5/24/2006

"To what after shampooing?", 10/20/2007

"For any transformation which is sufficiently diversified…", 7/4/2008

"The Dowdbot challenge", 5/21/2009

"Our love was real", 9/7/2009

"Botty man", 9/14/2009

"Richard Powers on his way to a decision", 10/28/2009

"Just add 'intelligent' and 'informed'", 10/26/2010

"The robot army", 11/28/2010

"The case of the missing spamularity", 12/23/2010

I would have sworn that I posted something about this video when it came out early last summer, but apparently not.
