More than 40 years ago, two researchers from the US traveled to a remote region in Papua New Guinea, seeking to answer a fundamental question about human nature: Do all people understand and express emotions in the same way? Are the smile and the scowl, the furrowed brow of anger, and the scrunched-up look of disgust universally recognized expressions?

As the intrepid scientists predicted, the Fore tribespeople of Papua New Guinea — an ancestrally “traditional” culture with no Western influences — were able to listen to an emotional story and reliably select a picture of the appropriate facial expression to match. As more experiments were conducted across the globe, the evidence accumulated. It seemed that these basic emotional expressions were universal to all of mankind. The news splashed across headlines. It was the topic of books. It grabbed the public’s attention.

But even when APS William James Fellow Paul Ekman and Wallace V. Friesen first published their Papua New Guinea results, the “universality hypothesis” was not new. Nearly 100 years prior, Charles Darwin posited that there are clear and distinct pathways between emotions and expressions — that is, the affect seemingly hidden in the mind and brain is revealed in a distinct set of facial muscle movements. Moreover, he believed that this was true of all people, in all cultures: “…the young and the old of widely different races, both with man and animals, express the same state of mind by the same movements.”

In some sense, Darwin was correct. The notion seems consistent with the way we interact with others, and Ekman and Friesen’s experiments seemed to prove it. It’s true that people from vastly different cultures smile when they’re happy, laugh when they’re tickled, and tear up when a loved one passes.

But are the expressions of emotion — and the emotions themselves — really as universal as we think? The question draws on a cross-section of psychological investigators, from social cognition researchers to neuroscientists. Moreover, the research incorporates methods and data from other disciplines such as anthropology, sociology, and linguistics.

Research Makes an ‘About Face’

Despite initial support for the “universality hypothesis,” research conducted in the last decade or two provides compelling evidence for a more nuanced idea — that, indeed, there are cultural differences in the expression and interpretation of emotions, and these differences can be parsed out using the right metrics.

One example stems from the relationship between verbal language and the subtle language of emotional expression. In the same way that accents — different pronunciations used by a subset of speakers — exist in verbal languages, they are also manifest in facial expressions of emotion. In a 2003 study, Abigail A. Marsh of Georgetown University, Hillary Anger Elfenbein of Washington University in St. Louis, and the late APS Fellow Nalini Ambady (then at Harvard University) had participants observe photographs of people with various emotional expressions and then judge them on several parameters. Some of the photographs were of individuals of Japanese citizenship and ancestry, whereas others were Japanese-Americans. The subjects looked similar, but were from entirely different cultural upbringings.

As the researchers discovered, American and Canadian participants were able to identify the cultural background of the person in the picture at a level better than chance alone — that is, they could reliably discern whether the individual was Japanese or Japanese-American. They later replicated these results in a similar study, this time having US college students distinguish Caucasian Americans from Caucasian Australians.

Other studies suggest that living in a particular culture and interacting with similar individuals produces an in-group advantage for picking up on their emotions. And central to the emotional “accent” hypothesis, participants in both studies weren’t able to discern nationality for neutral expressions. Rather, there was something specific about the smiles and scowls that gave away the subjects’ heritage.

Marsh, Elfenbein, and Ambady wrote, “People can judge cultural background through nonverbal accents, just as they can judge the geographic backgrounds of people speaking a common language — for example, a Texan versus a Scot — through verbal accents.”

Lending support to these findings, a new study provides further evidence that emotional expressions — and recognition of those expressions — vary based on cultural upbringing. This time, researchers focused on the Himba tribe, an ancestrally “traditional” group of herders and nomads living in rural Namibia.

Using the original anthropological “two-culture approach,” as Ekman and Friesen had done 40 years earlier, psychological scientists Maria Gendron and APS Board Member Lisa Feldman Barrett (both of Northeastern University) and colleagues conducted an experiment similar to the classic Papua New Guinea study, but with one important methodological twist. Instead of asking participants to match a facial expression picture to a particular emotional word or story, the researchers asked some of the participants to freely sort the facial expressions into piles, with each pile representing a certain emotional state. Thus, these participants had to assess emotions without any cues from the experimenters. The subjects were allowed to create as many piles as they felt necessary. Once the sorting was complete, the participants were asked to provide a word or label to describe the category of each pile.

The researchers reasoned that if participants were consistently able to sort all the smiling faces into one pile, all the scowling faces into another pile, and all the pouting faces into a third — and then unhesitatingly describe the emotions represented in each pile as happy, angry, or sad — then this result would be evidence that the universality hypothesis held firm.

But that’s not quite what happened. Instead, individuals from the Himba tribe sorted all the smiling faces into the same pile and all the wide-eyed (“fear”) faces into another pile, but scattered the other faces among different piles. Also, Himba individuals tended to describe the facial expressions in relation to specific behaviors (e.g. “smiling” or “looking at something”). By contrast, individuals from a comparative US group offered the classic internal-state emotion words we’re familiar with (e.g. “happiness” or “fear”).

Participants in another experimental condition, however, were cued with the emotion words — anger, fear, happiness, disgust, sadness, or neutral — as they viewed and sorted the pictures. In this case, participants from both the US and the Himba tribe offered up responses that looked more universal, suggesting that the word cues were the primary source of the “universality” seen in so many previous experiments. Gendron, Barrett, and their colleagues replicated these findings in another set of experiments using nonverbal vocalizations, finding that Himba individuals freely labeled sounds such as growls and sighs using behavioral labels, whereas US participants labeled the sounds with emotion words.

But, as is often the case in science, it takes several converging lines of evidence to feel confident in a new result. Rachael Jack (University of Glasgow, United Kingdom) and colleagues conducted a study to test the universality hypothesis from a different methodological perspective. Using a computer, the researchers showed participants 3D animated models of human faces with randomly selected subsets of muscle movements — a technique that yielded a huge set of 4,800 distinct faces. Participants of Western Caucasian and East Asian cultural backgrounds then indicated which distinct emotion the face was displaying by choosing a label from a small set of emotion words.

As predicted, individuals from Western cultures tended to form six distinct categories, each reflecting one of the classic emotions (e.g. happiness, surprise, fear, disgust, anger, or sadness). But the results were different for the East Asian participants. These individuals showed considerable overlap between emotion categories. That is, their facial expression models of fear, surprise, disgust, and anger shared more similarity in facial movements than the Westerners’, and when viewing a particular muscle movement configuration — one that a Westerner would unhesitatingly call “anger” — East Asians were much more likely to categorize it as surprise or fear instead.

That finding calls into question the notion that human emotion is universally composed of six distinct emotion categories. And along with a confluence of additional evidence, these experiments have been instrumental in revealing cultural variation in how emotions are conveyed and perceived.

Emotion in Context

As it turns out, contextual factors above and beyond cultural variation seem to be important in other emotional expression domains as well. Exploring these context effects, many researchers argue, is a critical next step in understanding how humans perceive emotions. Rarely do we see a face or hear a voice in isolation; rather, we take in many signals at once, all of which modulate our perception of a single emotional scene. Integrating these multisensory domains has become the focus of recent research on emotional expressions.

A 2010 study by Akihiro Tanaka and colleagues revealed that Japanese people tend to focus more on emotional vocal cues when observing faces than do Dutch people; that is, they pay more attention to the multisensory context in emotional situations. Though research still hasn’t revealed exactly why this is the case, it may be that Japanese people control their facial expressions more than Dutch people do, thereby requiring a greater reliance on the overall multisensory context.

But emotional context cues also seem to be important irrespective of cultural upbringing. For example, a study published several years ago in Science demonstrated that, in general, people tend to focus attention on emotion cues in the body when a facial expression is ambiguous. As research has shown, times of peak emotion don’t necessarily produce easily distinguishable positive or negative expressions in the face. Winning a crucial point in a tennis match, for instance, might produce a face that looks more like pain or anger than ecstatic pleasure.

By cutting and pasting pictures of faces onto “winning” or “losing” body language pictures, Hillel Aviezer and colleagues found that participants actually didn’t use any information from the face to determine whether the subject had won or lost a tennis point. Instead, the researchers discovered that the person’s body was the critical emotional cue, despite the fact that participants thought they were picking up on emotional cues gleaned from the face.

Even the emotion words provided as part of the experimental tasks used to assess emotion perception offer a kind of context that shapes the perception of facial expressions. Barrett and her colleagues demonstrated this in laboratory studies using an experimental procedure called semantic satiation, in which a word or phrase is repeated so often that it loses meaning for the listener. After being repeatedly exposed to emotion words, participants had difficulty recognizing a scowling face as angry or a pouting face as sad.

These findings challenge the universality hypothesis that a face speaks for itself, lending support to a more “constructed” perspective of emotional perception — one that’s highly dependent on context cues (e.g. vocalizations, body language, or emotion words).

Expressing in a Neural Network

As soon as technology allowed researchers to look for emotion in the brain, they discovered more evidence of a culture-specific, context-dependent framework. As several recent studies have shown, there is no clear-cut brain circuit or network for each emotion. Rather, emotions arise from a melding of more basic properties: valence (general positive or negative feelings), overall level of arousal, and, critically, specific situational cues.

A recent study suggests that fear and anger produce similar patterns of brain activity, but that these patterns can be modulated by the context in which the emotion is felt. Intending to explore how fear and anger are represented in the brain, Christine D. Wilson-Mendenhall of Northeastern University and her colleagues had participants read and become familiar with rich, compelling emotional scenarios. The stories were constructed so that participants could feel either anger or fear while immersing themselves in those stories. One scenario, for example, was about the fear that accompanies presenting an incomplete presentation to a boss, and feeling angry at coworkers for failing to contribute. Another was about the fear that comes from walking through the woods in the dark, and the anger that comes from letting yourself become lost at that hour.

After listening to the stories in an fMRI scanner, the participants were told to experience either fear or anger while immersing themselves in the scenario. This was the crucial step: If fear and anger are truly natural kinds, the brain areas active for each emotion should belong to an anatomically constrained circuit or network that is consistent across contexts but is distinct from those for any other emotion. But the scans revealed something interesting: Fear and anger reliably produced activation in the same areas of the frontal and temporal cortex. Not only that, but the two emotions were represented differently when experienced in a situation involving physical danger versus one involving a social evaluation. As the researchers write, “The situation in which an emotion concept was experienced shaped how the emotion was instantiated in the brain.

“According to basic emotion theories, emotions are natural kinds, each produced by a unique, anatomically modular circuit stable across instances of emotion,” say Wilson-Mendenhall and colleagues. “From this perspective, an emotion such as fear should activate one or more brain regions significantly more than should any other emotion, and should also show stability in the areas activated across its instances.”

But that’s not what the researchers found. Fear and anger don’t neatly coincide with a particular circuit or network of brain activation.

This study serves as an important example, but is hardly the only one to examine the neural basis of emotion. A 2012 meta-analysis by Kristen Lindquist (University of North Carolina at Chapel Hill), Barrett, and colleagues statistically summarized 15 years of research on this topic, finding that no one particular brain region is active for a specific emotion — that is, there is little to no one-to-one mapping between an emotion and a brain area. Instead, the researchers found that the same brain areas are activated across emotion categories.

All in all, these findings suggest that studying each emotion as a discrete, essential entity is not the most useful approach. Humans from different cultures express and interpret emotions differently, and the underlying brain mechanisms for any given emotion show an astounding degree of overlap. We should view emotions as emerging from more basic ingredients, shaped in part by situational and cultural contexts.

The Future of Expression

Though researchers are still hammering out the elusive details of a culturally constructed and context-specific perspective of affective expression, the potential practical uses of such findings are not far off on the horizon.

For instance, while we have uncovered and explored the disconnects that exist between cultures in communicating emotions, we have also learned that effective interactions between cultural groups can break down these barriers, even with only short stints of training and constructive feedback. As Elfenbein writes, “Because in-group advantage results from familiarity with culturally specific elements of nonverbal expression, training and intervention programs can increase familiarity with these elements, thus eliminating in-group advantage. Such training is already starting to take place, for example in work commissioned by the US Army Research Institute for soldiers going overseas.”

In addition, these basic research findings will prove crucial for advancing face- and voice-recognition software, a practical use that has far-reaching implications in an increasingly digital world. Indeed, several companies are already tapping into emotion research, seeking to streamline the somewhat cumbersome human-robot interactions already in place in our mobile phones and cars. It’s possible, for instance, that vehicles of the future could detect sleepiness or the anger of road rage in your face, prompting you to pull the car over to keep safe.

Undoubtedly, marketers are likely to be leading advocates and underwriters for such innovations. Tabling the dystopian visions for a moment, one can imagine how focusing a camera on your face to target an advertisement based on your mood would be a particularly useful tactic for an advertising firm. An understanding of how individuals from disparate cultures express emotions would be a critical slice of knowledge in that effort.

But there are also more widely accepted ideas floating around. Emotion detection software could be used to aid those who need it most — individuals with autism, for instance — and could prompt meaningful improvements in cross-cultural social interactions and quality of life.

As is usual in the feedback loops of research and innovation, these practical uses will ultimately drive scientists to ask new questions about emotion, chipping away at the cross-cultural barriers to emotional communication. The ultimate goal — a comprehensive understanding of emotional intelligence — is certainly a curiosity-fueled endeavor for scientists, but one that doesn’t culminate with a journal publication or two.

References and Further Reading:

Aviezer, H., Trope, Y., & Todorov, A. (2012). Body cues, not facial expressions, discriminate between intense positive and negative emotions. Science, 338(6111), 1225–1229.

Barrett, L. F., Mesquita, B., & Gendron, M. (2011). Context in emotion perception. Current Directions in Psychological Science, 20, 286–290.

Darwin, C. (1998). The expression of the emotions in man and animals. Oxford University Press.

Ekman, P. & Friesen, W. V. (1971). Constants across cultures in the face and emotion. Journal of Personality and Social Psychology, 17(2), 124–129.

Elfenbein, H. A. (2013). Nonverbal dialects and accents in facial expressions of emotion. Emotion Review, 5(1), 90–96.

Gendron, M., Roberson, D., van der Vyver, J. M., & Barrett, L. F. (in press). Perceptions of emotion from facial expressions are not culturally universal: Evidence from a remote culture. Emotion.

Gendron, M., Roberson, D., van der Vyver, J. M., & Barrett, L. F. (2014). Cultural relativity in perceiving emotion from vocalizations. Psychological Science. Advance online publication. doi: 10.1177/0956797613517239.

Jack, R. E., Garrod, O. G., Yu, H., Caldara, R., & Schyns, P. G. (2012). Facial expressions of emotion are not culturally universal. Proceedings of the National Academy of Sciences, 109(19), 7241–7244.

Lindquist, K. A., Wager, T. D., Kober, H., Bliss-Moreau, E., & Barrett, L. F. (2012). The brain basis of emotion: A meta-analytic review. Behavioral and Brain Sciences, 35(3), 121–143.

Marsh, A. A., Elfenbein, H. A., & Ambady, N. (2003). Nonverbal “accents”: Cultural differences in facial expressions of emotion. Psychological Science, 14(4), 373–376.

Tanaka, A., Koizumi, A., Imai, H., Hiramatsu, S., Hiramoto, E., & de Gelder, B. (2010). I feel your voice: Cultural differences in the multisensory perception of emotion. Psychological Science, 21(9), 1259–1262.

Wilson-Mendenhall, C. D., Barrett, L. F., Simmons, W. K., & Barsalou, L. W. (2011). Grounding emotion in situated conceptualization. Neuropsychologia, 49(5), 1105–1127.