Brain uses ‘older’ systems/structures to preferentially process emotion expressed through vocalizations.

It takes just one-tenth of a second for our brains to begin to recognize emotions conveyed by vocalizations, according to researchers from McGill. It doesn’t matter whether the non-verbal sounds are growls of anger, the laughter of happiness or cries of sadness. More importantly, the researchers have also discovered that we pay more attention when an emotion (such as happiness, sadness or anger) is expressed through vocalizations than we do when the same emotion is expressed in speech.

The researchers believe that the speed with which the brain ‘tags’ these vocalizations, and the preference given to them over language, are due to the potentially crucial role that decoding vocal sounds has played in human survival.

“The identification of emotional vocalizations depends on systems in the brain that are older in evolutionary terms,” says Marc Pell, Director of McGill’s School of Communication Sciences and Disorders and the lead author on the study that was recently published in Biological Psychology. “Understanding emotions expressed in spoken language, on the other hand, involves more recent brain systems that have evolved as human language developed.”

Of nonsense speech and growls

The researchers were interested in finding out whether the brain responds differently when emotions are expressed through vocalizations (sounds such as growls, laughter or sobbing, where no words are used) than when they are expressed through language. They focused on three basic emotions (anger, sadness and happiness) and tested 24 participants by playing a random mix of vocalizations and nonsense speech (e.g., “The dirms are in the cindabal”) spoken with different emotional intent. (The researchers used nonsense phrases to avoid any linguistic cues about the emotions.) They asked participants to identify which emotions the speakers were trying to convey, and used electroencephalography (EEG) to record how quickly, and in what ways, the brain responded as participants heard the different types of emotional vocal sounds.

They were able to measure:

how the brain responds to emotions expressed through vocalizations compared to spoken language, with millisecond precision (a sketch of this kind of measurement follows the list);

whether certain emotions are recognized more quickly through vocalizations than others and produce larger brain responses; and

whether people who are anxious are particularly sensitive to emotional voices based on the strength of their brain response.
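
The press release does not include analysis code, but the kind of EEG measurement it describes can be sketched with the open-source MNE-Python library. Everything below is illustrative only: the file name, trigger codes, condition labels, epoching window and component time window are hypothetical choices made for the example, not the parameters used in the published study.

```python
# Illustrative ERP sketch using MNE-Python (https://mne.tools).
# File names, event codes and time windows are hypothetical,
# not the parameters from the published study.
import mne

# Load a continuous EEG recording (hypothetical file).
raw = mne.io.read_raw_fif("emotion_eeg_raw.fif", preload=True)

# Stimulus triggers: vocalizations vs. emotional nonsense speech (hypothetical codes).
events = mne.find_events(raw, stim_channel="STI 014")
event_id = {
    "vocalization/anger": 1, "vocalization/sadness": 2, "vocalization/happiness": 3,
    "speech/anger": 11, "speech/sadness": 12, "speech/happiness": 13,
}

# Epoch from 200 ms before to 800 ms after stimulus onset, baseline-corrected.
epochs = mne.Epochs(raw, events, event_id, tmin=-0.2, tmax=0.8,
                    baseline=(None, 0), preload=True)

# Average epochs per condition to obtain event-related potentials (ERPs)
# with millisecond resolution.
evoked_voc = epochs["vocalization"].average()
evoked_speech = epochs["speech"].average()

# Mean amplitude in a generic early window (150-250 ms) as a simple component measure.
def mean_amplitude(evoked, tmin, tmax):
    return evoked.copy().crop(tmin, tmax).data.mean()

print("Early response (vocalizations):", mean_amplitude(evoked_voc, 0.15, 0.25))
print("Early response (speech):       ", mean_amplitude(evoked_speech, 0.15, 0.25))
```

Comparing such window-averaged amplitudes and their latencies across conditions is the general logic behind the brain-response contrasts the article reports.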

Anger leaves longer traces — especially for those who are anxious

The researchers found that participants were able to detect vocalizations of happiness (i.e., laughter) more quickly than vocal sounds conveying either anger or sadness. Interestingly, however, angry sounds and angry speech both produced ongoing brain activity that lasted longer than the activity evoked by either of the other emotions, suggesting that the brain pays special attention to anger signals.

“Our data suggest that listeners engage in sustained monitoring of angry voices, irrespective of the form they take, to grasp the significance of potentially threatening events,” says Pell.

The researchers also discovered that individuals who are more anxious have a faster and stronger response to emotional voices in general than people who are less anxious.

“Vocalizations appear to have the advantage of conveying meaning in a more immediate way than speech,” says Pell. “Our findings are consistent with studies of non-human primates which suggest that vocalizations that are specific to a species are treated preferentially by the neural system over other sounds.”

About this neuroscience research

Funding: The research was funded by the Natural Sciences and Engineering Research Council of Canada.

Source: Katherine Gombay – McGill University

Image Source: The image is adapted from the McGill press release

Original Research: Abstract for “Preferential decoding of emotion from human non-linguistic vocalizations versus speech prosody” by M.D. Pell, K. Rothermich, P. Liu, S. Paulmann, S. Sethi, and S. Rigoulot in Biological Psychology. Published online October 2015. doi:10.1016/j.biopsycho.2015.08.008

Abstract

Preferential decoding of emotion from human non-linguistic vocalizations versus speech prosody

This study used event-related brain potentials (ERPs) to compare the time course of emotion processing from non-linguistic vocalizations versus speech prosody, to test whether vocalizations are treated preferentially by the neurocognitive system. Participants passively listened to vocalizations or pseudo-utterances conveying anger, sadness, or happiness as the EEG was recorded. Simultaneous effects of vocal expression type and emotion were analyzed for three ERP components (N100, P200, late positive component). Emotional vocalizations and speech were differentiated very early (N100) and vocalizations elicited stronger, earlier, and more differentiated P200 responses than speech. At later stages (450–700 ms), anger vocalizations evoked a stronger late positivity (LPC) than other vocal expressions, which was similar but delayed for angry speech. Individuals with high trait anxiety exhibited early, heightened sensitivity to vocal emotions (particularly vocalizations). These data provide new neurophysiological evidence that vocalizations, as evolutionarily primitive signals, are accorded precedence over speech-embedded emotions in the human voice.
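
The abstract's two quantitative claims, a stronger late positivity (LPC) for anger in a 450–700 ms window and heightened early sensitivity in high-trait-anxiety listeners, come down to a window-averaged amplitude contrast and an individual-differences correlation. The sketch below illustrates that logic with synthetic numbers; the arrays, effect sizes and the use of a simple Pearson correlation are assumptions made for illustration, not the study's actual data or statistics.

```python
# Illustrative only: synthetic per-participant values, not data from the study.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n = 24  # number of participants in the reported experiment

# Hypothetical LPC mean amplitudes (microvolts) in a 450-700 ms window.
lpc_anger_voc = rng.normal(3.0, 1.0, n)   # anger vocalizations
lpc_other_voc = rng.normal(2.0, 1.0, n)   # sad/happy vocalizations (pooled)

# Window-averaged amplitude contrast: is the late positivity larger for anger?
print("Mean LPC difference (anger - other):",
      (lpc_anger_voc - lpc_other_voc).mean())

# Hypothetical trait-anxiety scores and early (P200) amplitudes per participant.
anxiety = rng.normal(40, 10, n)
p200_amplitude = 0.05 * anxiety + rng.normal(0, 0.5, n)

# Individual-differences question: do more anxious listeners show larger responses?
r, p = pearsonr(anxiety, p200_amplitude)
print(f"Anxiety vs. P200 amplitude: r = {r:.2f}, p = {p:.3f}")
```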


Feel free to share this Neuroscience News.