Today, machine learning is giving scientists a new way to interpret the subtle movements of the face. Researchers at Carnegie Mellon, for instance, are using a multi-modal algorithm to analyze facial expression based on 68 separate points on the face, including the eyebrows, eye corners, mouth, and nose.

A new system called MultiSense can also track in real-time a person’s head position, the direction of that person’s gaze, and the person’s body orientation. This level of detail can be surprisingly revealing. Looking at what a person’s nose and eyebrows are doing can differentiate between a happy smile and an angry smile, for example, or a smile that’s triggered by a social situation rather than an actual emotion. “So a lot of time what we see [on a person’s face] is the social norm,” says Louis-Philippe Morency, an assistant professor in Carnegie Mellon’s School of Computer Science. “Someone is smiling to smile back, so the dynamics of that smile would be different because of different emotional states and social states.”

Morency and his colleagues are particularly interested in using machine learning to trace connections between facial expressions and emotional state among depressed people. And what they’ve found so far is unexpected. Depressed people and non-depressed people smile with the same frequency, but the kinds of smiles differ: depressed people’s smiles last for a shorter period of time. (In addition to tracking smile duration, the sensor platform tracks smile intensity on a 100-point scale.)
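The duration finding can be illustrated with a small sketch. The code below is a hypothetical illustration, not the MultiSense implementation: it segments a per-frame smile-intensity signal (on the article’s 100-point scale) into discrete smile events, then reports the count and mean duration. The threshold and frame rate are assumed values chosen for the example.

```python
# Hypothetical sketch of the measurement described in the article:
# given per-frame smile intensity (0-100 scale), segment the signal
# into smile events and compare how often and how long people smile.
# The threshold (40) and frame rate (30 fps) are illustrative
# assumptions, not parameters from the MultiSense platform.

def smile_events(intensity, threshold=40):
    """Return (start, end) frame-index pairs where intensity stays at/above threshold."""
    events, start = [], None
    for i, value in enumerate(intensity):
        if value >= threshold and start is None:
            start = i                      # a smile event begins
        elif value < threshold and start is not None:
            events.append((start, i))      # the event ends
            start = None
    if start is not None:                  # signal ended mid-smile
        events.append((start, len(intensity)))
    return events

def summarize(intensity, fps=30, threshold=40):
    """Return (number of smiles, mean smile duration in seconds)."""
    events = smile_events(intensity, threshold)
    durations = [(end - start) / fps for start, end in events]
    mean = sum(durations) / len(durations) if durations else 0.0
    return len(events), mean

# Two toy signals with the SAME number of smiles but DIFFERENT durations,
# mirroring the finding: frequency matched, duration did not.
long_smiles  = [0]*10 + [80]*60 + [0]*10 + [75]*60 + [0]*10
short_smiles = [0]*10 + [80]*15 + [0]*10 + [75]*15 + [0]*10

print(summarize(long_smiles))   # (2, 2.0) -- two smiles, 2.0 s each
print(summarize(short_smiles))  # (2, 0.5) -- two smiles, 0.5 s each
```

On toy data like this, both recordings register the same smile count while the mean durations diverge, which is the shape of the pattern the researchers report.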

There was also a pronounced gender difference in facial expressions among depressed people. In one University of Southern California study, Morency and three other researchers found that depressed men frown more often than non-depressed men, but observed the opposite effect among women: Depressed women frowned less frequently than non-depressed women.

“The really interesting next part,” Morency says, “is to see how [these findings are] aligned with social norms.” For instance, many women have had the experience of being told to smile. “Is it related to culture? Is it local? National? International? Or is there even another factor—social, cultural, physiological—that we don’t know yet?”

From a health-care perspective, this technology could help human doctors track their patients’ well-being over time—and do so using objective, quantifiable data. In the short term, Morency believes MultiSense’s abilities are on par with those of an expert clinician. (“I think expert clinicians do see these cues,” he said. “They may not even realize it.”)

There are other implications for this kind of technology. It’s no surprise, for instance, that the U.S. military has funded much of the research into reading facial expressions. The Defense Department is interested in using facial-recognition platforms for treating people suffering from PTSD. It also has a longstanding goal of using such sensors as a way to understand and predict behaviors. Decades ago, the Department of Defense began amassing a huge database of facial expressions for this purpose.