Beware emotional robots: Giving feelings to artificial beings could backfire, study suggests

In the recent movie Rogue One: A Star Wars Story, the face of the character Grand Moff Tarkin was constructed digitally, as the actor who had originally played him had died. Some who knew about the computer trickery saw his appearance as slightly unnatural, leading to a sense of unease. Their discomfort demonstrates what the Japanese roboticist Masahiro Mori referred to in 1970 as the “uncanny valley”: Our affinity toward robots and animations increases as they physically appear more humanlike, except for a large dip where they are almost but not quite there.

But what happens when a character’s appearance remains the same, but observers think its mind has become more humanlike? New research reveals that this, too, unnerves people, a finding with implications for a range of human-computer interactions.

The study “pushes forward work on the uncanny valley” by showing that “it’s not simply how [something] moves and how it looks, but also what you think it represents,” says Jonathan Gratch, a computer scientist at the University of Southern California in Los Angeles, who was not involved with the work. “There’s going to be a lot more human-machine interactions, human-machine teams, machines being your boss, machines writing newspaper articles. And so this is a very topical question and problem.”

Previous work has tied discomfort with humanlike robots to the emotions people ascribe to them. In a 2012 study by the psychologist Kurt Gray of the University of North Carolina in Chapel Hill and the late psychologist Daniel Wegner, participants watched a brief video of a robot’s head either from the front, where they could see its “human” face, or from behind, where they saw its electrical components. Those who watched its face rated the robot as more capable of feeling pain and fear, and as a result felt more “creeped out.”

But what happens when the appearance of an artificial intelligence remains the same but its emotions become more humanlike? To find out, Jan-Philipp Stein and Peter Ohler, psychologists at the Chemnitz University of Technology in Germany, gave virtual-reality headsets to 92 participants and asked them to observe a short conversation between a virtual man and woman in a public plaza. The characters discuss their exhaustion from hot weather, the woman expresses frustration about lack of free time, and the man conveys sympathy for the woman’s annoyance at waiting for a friend.

Everyone watched the same scene, but participants received one of four descriptions. Half were told the avatars were controlled by humans, and half were told they were controlled by computers. Within each group, half were told the conversation was scripted, and half were told it was spontaneous.

Those who thought they’d watched two computers interact autonomously saw the scene as more eerie than did the other three groups. That is, natural-seeming social behavior was fine when coming from a human, or from a computer following a script. But when a computer appeared to feel genuine frustration and sympathy, it put people on edge, the team reports this month in Cognition.

Stein and Ohler call the phenomenon the “uncanny valley of the mind.” But whereas the uncanny valley is normally used to describe the visual appearance of a robot or virtual character, this study finds that, given a particular appearance, emotional behavior alone can seem uncanny. “It’s pretty neat in that they used all the same avatars and just changed the conceptualization of it,” Gray says.

Some work shows that people are more comfortable with computers that display social skills, but this study suggests there are limits. Annoyance at waiting for a friend, for example, might feel a little too human. When it comes to social skills, there may be not an uncanny valley but an uncanny cliff. When designing virtual agents, Gray suggests, “keep the conversation social and emotional but not deep.”

An open question is why the volunteers who thought they were watching two spontaneous computers felt distressed. Stein suggests they may have felt that human uniqueness was under threat, and that humans could, in turn, lose superiority and control over their technology. In future work, Stein plans to test whether people feel more comfortable with humanlike virtual agents when they believe they have control over the agents’ behavior.

Gray and Gratch say next steps should include measuring not only people’s explicit ratings of creepiness, but also their behavior toward social bots. “A lot of the creepiness may arise more from when you reflect on it than when you’re in an interaction,” Gratch says. “You might have a nice interaction with an attractive virtual woman, then you sit back and go, ‘Eugh.’”