On a hot, muggy morning in late August, Emily’s dad escorts her into the Autism Center on Cummington Mall for a couple of hours of tests. It’s part of a sound-processing study comparing minimally verbal adolescents with high-functioning autistic adolescents who can speak, as well as typically developing adolescents and adults.

The investigation is painstaking, because every study must be adapted for subjects who not only don’t speak but may also be prone to easy distraction, extreme anxiety, aggressive outbursts, and even running away. “[Minimally verbal children] do tend to understand more than they can speak,” says Tager-Flusberg. “But they won’t necessarily demonstrate in any situation that they are following what you are saying.”

“The study at BU especially was interesting to us because it focused on the kind of autism that Emily has….I know autistic children can behave a certain way—they can be antisocial and so forth—but no one seemed to be addressing the fact that some of these kids can’t communicate.”

That’s obvious in Emily’s first task, a vocabulary test. Seated before a computer, she watches as pictures of everyday items pop up on the screen, such as a toothbrush, a shirt, a car, and a shoe. When a computer-generated voice names one of these objects, Emily’s job is to tap the correct picture. In earlier pilot testing for this study, Emily showed that she understands more than 100 words. But today, she’s just not interested. Between short flurries of correct answers, Emily weaves her head, slumps in her chair, or flaps her elbows as the computer voice drones on—car…car…car and then umbrella…umbrella…umbrella. When one of the researchers tries to get Emily back on task, she simply taps the same spot on the screen over and over. Finally, she gives the screen a hard smack.

The next session is smoother. Emily is given a kind of IQ test in which she quickly and (mostly) correctly matches shapes and colors, identifies patterns, and points out single items within increasingly complicated pictures of animals playing in the park, kids at a picnic, or cluttered yard sales.


Emily is minimally verbal, not nonverbal. “Words do come out of her,” her dad explains. She’ll say “car” when she wants to go for a ride or “home” when she’s out somewhere and has had enough. Sometimes she communicates with a combination of sounds and signs or gestures, because she has trouble saying words with multiple syllables. For instance, when she needs a “bathroom,” her version sounds like, “ba ba um,” but she combines it with a closed hand tilting 90 degrees—pantomiming a toilet flush.

“That’s a handy one,” her dad says. “She uses it to get out of things. When she’s someplace she doesn’t want to be, she’ll ask to go to the bathroom five or six times.”

The first word Emily ever said was “apple” when she was four years old. “We were going through the supermarket, and she grabbed an apple. Said it, and ate it. It was amazing to me,” her dad recalls.

The final item on the morning agenda is an EEG study, in which Emily must wear a net of moist electrodes fitted over her head while she listens to a series of beeps in a small, soundproof booth. The researchers have tried EEG with Emily twice before in pilot testing. The first time, she tolerated the electrode net. The second time, she refused. This time, with her dad to comfort her and a rewarding snack of gummi bears, Emily dons the electrode net without protest.

The point of this study is to see how well Emily’s brain distinguishes differences in sound—a key to understanding speech. For instance, normally developing children learn very early, well before they can speak, to separate out somebody talking from the birds chirping outside the window or an airplane overhead. They also learn to pay attention to deviations in speech that matter—the word “cat” versus “cap”—and to ignore those that don’t—cat is cat whether mommy or daddy says it.

“The brain filters out what’s important based on what it learns,” says Shinn-Cunningham. Some of this sound filtering is automatic, what brain researchers call “subcortical.” The rest is more complicated, a top-down process of organizing sounds and focusing the brain’s limited attention and processing power on what’s important.

EEG measures electrical fields generated by neuron activity in different parts of the brain. “Novel sounds should elicit a larger-than-normal brain response, and that should register on the EEG signal,” Shinn-Cunningham explains. There are 128 tiny EEG sensors surrounding Emily’s head and upper neck. Each sensor is represented as a line jogging along on the computer monitor outside the darkened booth where Emily sits with her dad holding her hand, watching a silent version of her favorite movie, Shrek.

Today’s experiment focuses on the automatic end of sound processing. A constant stream of beeps in one pitch is occasionally interrupted by a higher-pitched beep. How will Emily’s brain respond? Most of the time, the 128 EEG lines are tightly packed as they move across the screen. However, muscle movements generate large, visible peaks and troughs in the signals when Emily blinks or lolls her head from side to side. Once, just after a gummi bear break, several large, concentrated spikes show her chewing.

Shifts in attention are much more subtle, and the raw data will have to be processed before anything definitive can be said about Emily’s brain. The readout is time-coded with every beep, and the researchers will be particularly interested in the signals from the auditory areas in the brain’s temporal cortex, located behind the temples.

The beep test has six five-minute trials. But, after about twenty minutes, Emily is getting restless. It’s been a long morning. She starts scratching at the net of sensors in her hair. She’s frustrated that Shrek is silent. The EEG signals start to swing wildly. From inside the booth, stomping and moans of protest can be heard. When the booth’s door is opened at the end of the fourth trial, Emily’s eyes are red. She’s crying. Her father and the researchers try to cajole her into continuing.

“Just two more, Emmy,” her dad says. “Can you do two more for daddy?” And Emily answers with a word she can speak, quite loudly. “Noooo!” They call it a day. Emily will return to the center as the experiments move from beeps to words, and they can finish the last two trials then. All in all, it’s been a successful morning. “She did great,” says Tager-Flusberg.