What does it mean for the world if facial-recognition software gets really, really, really good? Computers can already reveal many secrets — our banking information, our shopping habits, our medical history. But what if, for instance, a computer could tell us whether someone may have autism? Or whether, from nothing more than a photo, someone is gay or straight?

Stanford Graduate School of Business researcher Michal Kosinski set out to answer the latter question in a controversial new study. Using a deep-learning algorithm, Kosinski and his colleagues fed in thousands of photos of white Americans who self-identified as either gay or straight, tagged accordingly. The software then learned physical commonalities — minute quantitative differences in facial measurements — that distinguish gay from straight faces.

His team found that the computer had astonishingly accurate “gaydar,” though it was slightly better at identifying gay men (81 percent accuracy) than lesbians (74 percent accuracy). Notably, the software outperformed human judges in the study by a wide margin.
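The setup the study describes (tagged example photos, quantitative facial measurements, a learned decision rule) can be sketched as a toy logistic-regression classifier. Every feature name and number below is invented for illustration; it is not the study's actual model or data.

```python
# Hypothetical sketch: a classifier learns a weighted combination of
# facial measurements. Plain stochastic-gradient logistic regression,
# standard library only; the "measurements" are synthetic.
import math, random

def train_logistic(samples, labels, lr=0.5, epochs=500):
    """Fit weights so that sigmoid(w . x + b) approximates the label."""
    n = len(samples[0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            err = p - y                      # gradient of the log loss
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

# Toy "facial measurement" vectors (e.g., a normalized jaw width and a
# brow-to-eye distance) drawn from two invented, overlapping clusters.
random.seed(0)
class0 = [[random.gauss(0.40, 0.05), random.gauss(0.30, 0.05)] for _ in range(50)]
class1 = [[random.gauss(0.55, 0.05), random.gauss(0.45, 0.05)] for _ in range(50)]
X, y = class0 + class1, [0] * 50 + [1] * 50

w, b = train_logistic(X, y)
acc = sum((predict(w, b, x) > 0.5) == bool(t) for x, t in zip(X, y)) / len(X)
print(f"training accuracy: {acc:.2f}")
```

The real study used a deep network to extract features before any classification step, but the principle is the same: the algorithm ends up with a weighted recipe of measurable facial differences, not an "understanding" of the people in the photos.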


Kosinski’s work builds on earlier, controversial research suggesting that hormonal balance in the womb influences both sexual orientation and appearance. “Data suggests that [certain groups of] people share some facial characteristics that are so subtle as to be imperceptible to the human eye,” Kosinski says. The study, according to Kosinski, merely tested that theory using a respected algorithm developed by Oxford Vision Lab.

Predictably, rights groups, including GLAAD and the Human Rights Campaign, were outraged by Kosinski’s study, questioning his methods while warning that his program posed a threat to members of the gay community.

Kosinski is known as both a researcher and a provocateur. He says that one of the goals for the study was to warn us of the dangers of artificial intelligence. He designed his research, he says, to goad us into taking privacy issues around machine learning more seriously. Could AI “out” people in any number of ways, making them targets of discrimination?

But for the sake of argument, let’s suppose that facial-recognition technology will keep improving, and that machines may someday be able to quickly detect a variety of characteristics — from homosexuality to autism — that the unaided human eye cannot. What would it mean for society if highly personal aspects of our lives were written on our faces?


Stanford researchers Michal Kosinski and Yilun Wang, co-authors of a study that claims to show that a computer program can detect sexual orientation from photos of faces. Christie Hemm Klok/The New York Times

The use of facial traits as a diagnostic tool, known as dysmorphology, is nothing new in medicine. Physicians have long used their eyes to help determine whether patients have genetic conditions such as CHARGE and Treacher Collins syndromes. Early on in medical school, for example, students learn the physical traits for Down syndrome (also known as trisomy 21) through textbooks and lecture slides.

I remember the first time I saw a baby with the condition, which appears in patients who have a third copy of chromosome 21 instead of the usual pair. The infant was born in a community hospital to a mother who had declined genetic screening. As he lay in his cot a few hours after birth, his up-slanted “palpebral fissures” (eyelid openings) and “short philtrum” (groove in the upper lip), among many other things, seemed subtle. Yet it took only a glance for my attending, an experienced pediatrician, to recognize that the diagnosis was likely. (Later, a test called a karyotype confirmed the presence of an extra chromosome.)

Could AI someday replace a professional human diagnostician? Just by looking at a subject, Angela Lin, a medical geneticist at Massachusetts General Hospital, can discern a craniofacial syndrome with a high degree of accuracy. She also uses objective methods — measuring the distance between eyes, lips, and nose, for example — for diagnostic confirmation. But this multifaceted technique is not always perfect. That’s why she believes facial recognition software could be useful in her work.
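The objective step Lin describes, measuring distances between facial features, is straightforward to sketch. The landmark coordinates below are invented; a real system would obtain them from a facial landmark-detection model.

```python
# Hypothetical sketch: reduce a face to inter-landmark distances and
# scale-invariant ratios. Coordinates are made up for illustration.
import math

def dist(a, b):
    """Euclidean distance between two (x, y) landmark positions."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

# (x, y) positions in arbitrary image units
landmarks = {
    "left_eye":     (30.0, 40.0),
    "right_eye":    (70.0, 40.0),
    "nose_tip":     (50.0, 60.0),
    "mouth_center": (50.0, 80.0),
}

eye_separation = dist(landmarks["left_eye"], landmarks["right_eye"])
nose_to_mouth  = dist(landmarks["nose_tip"], landmarks["mouth_center"])

# Ratios don't change when the photo is taken closer or farther away,
# which makes them comparable across images — the kind of quantity a
# clinician can check against published norms for a syndrome.
ratio = eye_separation / nose_to_mouth
print(f"eye separation / nose-to-mouth distance: {ratio:.2f}")
```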

Lin stresses that facial recognition technology is just one of many diagnostic tools, and that in most cases it’s not a substitute for a trained clinical eye. She also worries that widespread use of facial recognition software could be problematic: “The main barrier for me is privacy concerns. . . . We want to be sure the initial image of the person is deleted.”


Even so, the field of dysmorphology is rapidly expanding as researchers develop new ways to use the science, augmented by machine learning and big data, to diagnose an increasing number of genetic conditions.

Autism, for one, may involve physical characteristics too subtle for the human eye to detect. A few months ago, an Australian group published a study that used facial-recognition technology to discern the likelihood of autism using 3-D images of children with and without the condition. As in Kosinski’s study, the computer “learned” the facial commonalities of those with autism and successfully used them as a predictive tool.

The lead study author, Diana Tan, a PhD candidate at the University of Western Australia School of Psychological Sciences, warns that the technology has its limitations. A diagnosis of autism requires two distinct elements: identifying social and communication challenges, and assessing repetitive behaviors and restrictive interests.

Some scientists believe the social-communication difficulties may be linked to elevated prenatal testosterone — known as the “extreme male brain” theory of autism. Facial masculinization may result from this excessive testosterone exposure, and the computer algorithm was good at picking it up, which could explain its ability to predict autism through a photo alone.

The facial recognition technology was less successful at tracking traits related to severity: that is, repetitive behaviors and restrictive interests. While the computer reliably identified children with autism whose condition was marked by diminished empathy and sensitivity and other typically “male” traits (i.e., social-communication issues), it was less successful with children who predominantly exhibited restrictive and repetitive behaviors. This suggests that the latter may not be related to hormone exposure and its associated physical changes.


“While [the study] supports the ‘hypermasculine brain theory’ of autism,” Tan says, “it’s not a perfect correlation.”

“In my view,” she says, “[our technique] should be complementary to existing behavioral and development assessments done by a trained doctor, and perhaps one day it could be done much earlier to help evaluate risk,” adding that 3-D prenatal ultrasounds may potentially provide additional data, allowing autism risk to be predicted before birth.

Regardless of the technology’s apparent shortcomings, companies have been quick to leverage big data and facial-recognition capabilities to assist diagnosticians. Boston-based FDNA has been developing technology for use in clinical settings over the last five years and in 2014 released a mobile app for professionals called Face2Gene. In principle, it’s similar to the facial recognition software used in Tan’s and Kosinski’s studies, but rather than merely testing a scientific theory, it’s intended to do what doctors like Lin spend decades learning to do: diagnose genetic conditions based on facial characteristics.

Last year, the company teamed up on a study to use the app to help with autism diagnoses. The work has not yet been validated in the clinical setting, but it is already gaining adherents.

“We have over 10,000 doctors and geneticists in 120 countries using the technology,” says Jeffrey Daniels, FDNA’s marketing director. “As more people use it, the database expands, which improves its accuracy. And in cases where doctors input additional data” — for instance, information about short stature or cognitive delay, which often helps narrow down a diagnosis — “we can reach up to 88 percent diagnostic accuracy for some conditions.”
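Daniels’s point, that extra clinical observations sharpen a face-based guess, can be illustrated with a small Bayesian sketch. Every probability and likelihood ratio below is invented for illustration; this is not FDNA’s actual method.

```python
# Hypothetical sketch: each additional, roughly independent finding
# multiplies the odds of a diagnosis by its likelihood ratio (LR).
# All numbers are made up for illustration.

def update(prior, likelihood_ratio):
    """Bayes update in odds form: posterior odds = prior odds * LR."""
    odds = prior / (1.0 - prior) * likelihood_ratio
    return odds / (1.0 + odds)

p = 0.10             # prior: the facial match alone weakly suggests syndrome X
p = update(p, 8.0)   # + short stature observed (invented LR)
p = update(p, 5.0)   # + cognitive delay observed (invented LR)
print(f"posterior probability: {p:.2f}")
```

Each finding on its own is weak evidence, but combining them moves a 10 percent hunch past 80 percent, which is why the app’s accuracy climbs when doctors supply data beyond the photo.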


Phil Schiller, Apple’s senior vice president of worldwide marketing, announces features of the new iPhone X, including the new Face ID facial recognition system. AP Photo/Marcio Jose Sanchez

But many worry that pushing AI in this direction may come at a cost. When Apple presented the iPhone’s advanced facial-recognition technology just a stone’s throw from Kosinski’s Stanford lab earlier this month, the ACLU suggested that this could open up a privacy can of worms. “Face recognition is one of the more dangerous biometrics from a privacy standpoint,” wrote Jay Stanley, a senior policy analyst at the ACLU, “because it can be leveraged for mass tracking across society.”

Information, he says, could be used by third-party developers for surveillance purposes. “There is a difference between a technology we control,” he adds, “and one that is applied to us as a power play.”

Apple, Amazon, and Google have all teamed up with the medical community to try to develop a host of diagnostic tools using the technology. At some point, these companies may know more about your health than you do. Questions abound: Who owns this information, and how will it be used?

Could someone use a smartphone snapshot, for example, to diagnose another person’s child at the playground? The Face2Gene app is currently limited to clinicians; while anyone can download it from the App Store on an iPhone, it can only be used after the user’s healthcare credentials are verified. “If the technology is widespread,” says Lin, “do I see people taking photos of others for diagnosis? That would be unusual, but people take photos of others all the time, so maybe it’s possible. I would obviously worry about the invasion of privacy and misuse if that happened.”

Humans are pre-wired to discriminate against others based on physical characteristics, and programmers could easily build AI systems that mimic human bias. That’s what concerns Anjan Chatterjee, a neuroscientist who specializes in neuroesthetics, the study of what our brains find pleasing. He has found that, relying on baked-in prejudices, we often quickly infer character just from seeing a person’s face. In a paper slated for publication in Psychology of Aesthetics, Creativity, and the Arts, Chatterjee reports that a person’s appearance — and our interpretation of that appearance — can have broad ramifications in professional and personal settings. This conclusion has serious implications for artificial intelligence.

“We need to distinguish between classification and evaluation,” he says. “Classification would be, for instance, using it for identification purposes like fingerprint recognition . . . which was once a privacy concern but seems to have largely faded away. Using the technology for evaluation would include discerning someone’s sexual orientation or for medical diagnostics.” The latter raises serious ethical questions, he says. One day, for example, health insurance companies could use this information to adjust premiums based on a predisposition to a condition.

After considerable backlash from rights groups, Kosinski’s study, which had been approved by the Stanford research ethics board, underwent secondary ethical review by the editors of the Journal of Personality and Social Psychology. The editors have since concluded that there was no ethical breach.

As the media frenzy around Kosinski’s work has died down over the last few weeks, he is gearing up for his next project: exploring whether the same technology can predict political preferences from facial characteristics. But wouldn’t this just aggravate concerns about discrimination and privacy violations?

“I don’t think so,” he says. “This is the same argument made against our other study.” He then reveals his true goal: “In the long term, instead of fighting technology, which is just providing us with more accurate information, we need solutions to the consequences of having that information . . . like more tolerance and more equality in society,” he says. “The sooner we get down to fixing those things, the better we’ll be able to protect people from privacy or discrimination issues.”

In other words, instead of raging against the facial-recognition machines, we might try to sort through our inherent human biases instead. That’s a much more complex problem that no known algorithm can solve.

Amitha Kalaichandran is a resident physician and a health journalist based in Ottawa, Canada. Follow her on Twitter @DrAmithaK.