It’s no secret we’re in the midst of a mental health crisis. Whether you attribute this to social-media-induced malaise or a societal trade-off between “purpose and power,” as explored in Yuval Noah Harari’s latest offering Homo Deus, it amounts to much the same thing – an explosion in prescription SSRIs, billable therapy hours, and most disturbingly, suicides. Sadly, we are presently far less capable of diagnosing mental illnesses than physical ones.

Now, thanks in large part to revolutions in artificial intelligence and bioinformatics, researchers are unraveling many of the mysteries behind mental illness. In particular, a new paper (PDF link) authored by a collaboration of scientists at USC, Carnegie Mellon University, and Cincinnati Children’s Hospital Medical Center claims to have drawn a bead on some of the biomarkers that differentiate depressed and suicidal patients.

You may think you’re good at reading other people’s faces and moods. But when it comes to telling the difference between a person who is suicidal and one who is merely depressed, the signs are much more subtle. To tease out the differences, the researchers looked at a handful of facial gestures displayed by three groups of people – those with suicidal ideation, depressed patients, and a medical control group. During interviews with these groups, they recorded gestures including smiling, frowning, eyebrow raising, and head motion. The data were then fed into a machine-learning algorithm that looked for correlations between individual gestures, or combinations of them, and patient groups.
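The pipeline described above – gesture features extracted per interview, then a classifier trained to separate the groups – can be sketched in a few lines. To be clear, this is a toy illustration, not the study’s actual model: the feature names, values, and the simple nearest-centroid classifier below are all invented for the sake of example.

```python
# Hypothetical sketch: each interview summarized as a feature vector
# [fraction of Duchenne smiles, eyebrow-raise rate, head-motion rate],
# then classified by nearest group centroid. All numbers are made up.
train = [
    ([0.80, 0.30, 0.50], "control"),
    ([0.70, 0.25, 0.40], "control"),
    ([0.40, 0.15, 0.20], "depressed"),
    ([0.35, 0.20, 0.25], "depressed"),
    ([0.10, 0.10, 0.15], "suicidal"),  # smiles mostly non-Duchenne
    ([0.15, 0.05, 0.10], "suicidal"),
]

def centroids(samples):
    """Average feature vector per group label."""
    sums, counts = {}, {}
    for features, label in samples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, v in enumerate(features):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc]
            for label, acc in sums.items()}

def classify(features, cents):
    """Assign the group whose centroid is closest (Euclidean distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(cents, key=lambda label: dist(features, cents[label]))

cents = centroids(train)
print(classify([0.12, 0.08, 0.12], cents))  # → suicidal
```

A real system would of course use far richer features (the study drew on coded facial action units) and a properly validated model, but the basic shape – per-interview feature vectors mapped to patient groups – is the same.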

Remarkably, rather than frowning being the most telling feature in discriminating between depressed and suicidal patients, it was smiling. Specifically, Duchenne smiles versus non-Duchenne smiles held the key to differentiating the groups. A Duchenne smile involves the contraction of muscles surrounding the eyes, while a non-Duchenne smile doesn’t involve the eyes. People displaying non-Duchenne smiles were far more likely to have suicidal ideation than those displaying Duchenne smiles.

When we spoke with Dr. Morency, one of the researchers behind the study, he expressed hope that this algorithm could see use in clinical settings, helping doctors differentiate between patients who are depressed and those who are suicidal. Obviously, a person who is at risk for suicide requires a different kind of observation than someone who is struggling with a mild bout of depression. Studies such as Dr. Morency’s, however, often raise as many questions as they answer. In particular, what happens when algorithms start knowing us better than we know each other? Judging by the speed at which Facebook and Google are applying data-mining methods to our social media accounts, that day is likely not far away.

Unfortunately, astute minds like Dr. Morency’s have little to say when it comes to the unintended consequences of their work. Scientists typically draw a strict line between research breakthroughs and their societal application. For instance, were an algorithm such as Dr. Morency’s to find its way into the hands of insurance companies rather than medical clinicians, the results could be distinctly unsavory. A propensity toward suicide might hurt one’s ability to purchase life insurance, receive a promotion, or even land a job.

Unless we begin thinking deeply about the application of algorithms such as this, and pushing our legislators to institute sensible standards for their regulation, the genie that is machine learning may well turn out to be a demon.
