The sounds that make up speech, built from slight variations in vowels and consonants, trigger specific responses in the part of the brain responsible for speech processing, researchers report today in Science.

Phonemes — such as the 'buh' sound in 'bad' or the 'duh' in 'dad' — are thought to be the smallest linguistic elements that change a word's meaning. But the study suggests that the brain's superior temporal gyrus can recognize even smaller bits of speech, called features, that may be common across languages.

“We’ve known for a pretty long time now what area of the brain is really important for processing speech sounds,” says lead author Edward Chang, a neuroscientist at the University of California, San Francisco. “What we haven’t known is the details about how individual sounds are processed.”

Chang's team made the discovery by working with six patients who were preparing to undergo brain surgery to treat epilepsy. An array of electrodes was implanted in the brain of each person as part of pre-surgical testing. Each volunteer then listened to speech samples comprising 500 sentences spoken by 400 people, which together covered the full inventory of phonetic sounds in American English.

Temporal triggers

When the researchers compared the electrode recordings with the phonemes the volunteers heard, they found that phonemes sharing similar features elicited characteristic electrical responses in neurons within each patient's superior temporal gyrus.

Chang sees this as a starting point for understanding the mechanism behind the brain's seemingly effortless decoding of a stream of speech. “One of the things that happens in speech and language is that we transform sounds into meaning,” he says. Combinations of feature units give rise to phonemes; phonemes combine to form words; and groups of words, together, create meaning.

Josef Rauschecker, a neuroscientist at Georgetown University in Washington DC, notes that monkeys are known to have neurons that respond to phonetic features. The discovery of a similar capability in the human brain opens the door to studying the evolution of speech recognition, he says.

Identifying the neural mechanisms that make up normal phonetic coding in the brain can lead to a better understanding of abnormalities, says Mitchell Steinschneider, a neuroscientist at Albert Einstein College of Medicine of Yeshiva University in New York. For people with hearing loss, for instance, this might mean the development of more sophisticated processors to aid artificial hearing, he adds.