The robot was a model for how these desires and emotions are reflected in facial expression and how those expressions in turn affect social interaction. Take the drive for novelty. With no stimulus nearby, Kismet’s eyes would droop in apparent boredom. Then a lovely thing happened. If there was a person nearby, she would see Kismet’s boredom and wave a toy in front of the robot’s eyes. This activated Kismet’s program to look for brightly colored objects, which in turn moved the robot into its “aroused” affective state, with a facial expression bearing the hallmarks of happiness. The happy face, in turn, led the human to feel good about the interaction and to wave the toy some more — a socially gratifying feedback loop akin to playing with a baby.

Kismet is now retired and on permanent display, inert as a bronze statue, at the M.I.T. Museum. The most famous robot now in Breazeal’s lab, the one that the graduate students compete for time with, looks nothing like Kismet. It is a three-foot-tall creature, furry from head to toe, sort of a badger, sort of a Yoda, with big eyes, enormous pointy ears, a mouth with soft lips and tiny teeth, a furry belly, furry legs and pliable hands with real-looking fingernails. The reason the robot, called Leonardo (Leo for short), is so lifelike is that it was made by Hollywood animatronics experts at the Stan Winston Studio. (Breazeal consulted with the studio on the construction of the robotic teddy bear in the 2001 Steven Spielberg film “A.I.”) As soon as Leo arrived in the lab, Breazeal said, her students started dismantling it, stripping out all the remote-control wiring and configuring it instead with a brain and body that operated not by remote control but by computer-based artificial intelligence.

I had studied the videos posted on the M.I.T. Media Lab Web site, and I was fond of Leo even before I got to Cambridge. I couldn’t wait to see it close up. I loved the steadiness of its gaze, the slow way it nodded its head and blinked when it understood something, the little Jack Benny shrug it gave when it didn’t. I loved how smart it seemed. In one video, two graduate students, Jesse Gray and Matt Berlin, engaged it in an exercise known in psychology as the false-belief test. Leo performed remarkably. Some psychologists contend that very young children think all minds are permeable and that everyone knows exactly what they themselves know. Older children, after the age of about 4 or 5, have learned that different people have different minds and that it is possible for someone else to hold beliefs that the children themselves know to be false. Leo performed in the video like a sophisticated 5-year-old, one who had developed what psychologists call a theory of mind.

In the video, Leo watches Jesse Gray, who is wearing a red T-shirt, put a bag of chips into Box 1 and a bag of cookies into Box 2, while Matt Berlin, in a brown T-shirt, also watches. After Berlin leaves the room, Gray switches the items, so that now the cookies are in Box 1 and the chips are in Box 2. Gray locks the two boxes and leaves the room, and Leo now knows what Gray knows: the new location for the chips and cookies. But it also knows that Berlin doesn’t know about the switch. Berlin still thinks there are chips in Box 1.

The amazing part comes next. Berlin, in the brown T-shirt, comes back into the room and tries to open the lock on the first box. Leo sees Berlin struggling, and it decides to help by pressing a lever that will deliver to Berlin the item he’s looking for. Leo presses the lever for the chips. It knows that there are cookies in the box that Berlin is trying to open, but it also knows — and this is the part that struck me as so amazing — that Berlin is trying to open the box because he wants chips. It knows that Berlin has a false belief about what is in the first box, and it also knows what Berlin wants. If Leo had indeed passed this important developmental milestone, I wondered, could it also be capable of all sorts of other emotional tasks: empathy, collaboration, social bonding, deception?
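The belief-tracking at the heart of the demonstration can be captured in a few dozen lines of code, which hints at why the feat is more programming than psychology. What follows is a minimal sketch under simple assumptions (a world of boxes and items, agents who update their beliefs only when present); the names here are hypothetical, not drawn from the lab’s actual software.

```python
class BeliefTracker:
    """Tracks the true world state plus each observer's belief about it."""

    def __init__(self):
        self.world = {}       # box -> item, ground truth
        self.beliefs = {}     # agent -> that agent's own box -> item map
        self.present = set()  # agents currently in the room

    def enters(self, agent):
        self.present.add(agent)
        self.beliefs.setdefault(agent, {})

    def leaves(self, agent):
        self.present.discard(agent)

    def place(self, box, item):
        """An item goes into a box; only agents in the room see it happen."""
        self.world[box] = item
        for agent in self.present:
            self.beliefs[agent][box] = item

    def infer_goal(self, agent, box):
        """What does this agent *think* is in the box it is trying to open?"""
        return self.beliefs[agent].get(box)

    def box_with(self, item):
        """Where the item really is, according to ground truth."""
        for box, contents in self.world.items():
            if contents == item:
                return box

# Re-enact the demo: Gray stocks the boxes while Berlin watches.
leo = BeliefTracker()
leo.enters("gray")
leo.enters("berlin")
leo.place("box1", "chips")
leo.place("box2", "cookies")

# Berlin leaves; Gray swaps the items. Only Gray's beliefs update.
leo.leaves("berlin")
leo.place("box1", "cookies")
leo.place("box2", "chips")

# Berlin returns and struggles with box1. Leo infers his goal from
# Berlin's stale beliefs, then fetches from the item's true location.
want = leo.infer_goal("berlin", "box1")  # "chips" -- the false belief
print(want, leo.box_with(want))          # chips box2
```

The "theory of mind" here is just bookkeeping: one dictionary per observer, updated only while that observer is in the room, plus a lookup that answers questions from the observer’s dictionary rather than from the ground truth.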

Unfortunately, Leo was turned off the day I arrived, inertly presiding over one corner of the lab like a fuzzy Buddha. Berlin and Gray and their colleague, Andrea Thomaz, a postdoctoral researcher, said that they would be happy to turn on the robot for me but that the process would take time and that I would have to come back the next morning. They also wanted to know what it was in particular that I wanted to see Leo do because, it turned out, the robot could go through its paces only when the right computer program was geared up. This was my first clue that Leo maybe wasn’t going to turn out to be quite as clever as I had thought.

When I came back the next day, Berlin and Gray were ready to go through the false-belief routine with Leo. But it wasn’t what I expected. I could now see what I had seen on the video. But in person, I could also peek behind the metaphoric curtain and see something that the video camera hadn’t revealed: the computer monitor that showed what Leo’s cameras were actually seeing and another monitor that showed the architecture of Leo’s brain. I could see that this wasn’t a literal demonstration of a human “theory of mind” at all. Yes, there was some robotic learning going on, but it was mostly a feat of brilliant computer programming, combined with some dazzling Hollywood special effects.