Look at all the robots pictured below. Which one do you like best? Chances are, some will seem friendly, some weird, and some will not make much of an impression at all.

Somewhere in that spread of reactions lurks the uncanny valley, a dreaded pitfall for designers. It is occupied by robots that look lifelike, yet aren’t quite realistic enough to be convincing, and so start creeping people out. On either side of it, so the hypothesis goes, robots that are either more realistic or less lifelike tend to be more appealing (see graph below).


Where an individual robot falls on that continuum, says Maya Mathur, a biostatistician at Stanford University in California, is becoming ever more important to pin down. Robot designers could then steer clear of the uncanny valley – if it indeed exists – and build machines we are comfortable with.

“Robots are transitioning from something that’s part of a technological environment to something that’s a feature of our social environment,” she says, “always teetering on this boundary of being really creepy and really likeable. That’s something we need to understand.”

Mathur and colleague David Reichling at the University of California, San Francisco, selected 80 examples of robot faces, from the cartoonish and metallic MIT robot Kismet to the painstakingly realistic BINA48 (pictured at the start of this story). They asked 66 workers on the online marketplace Amazon Mechanical Turk to rate the faces on a scale from 1 to 100, based on how mechanical and how human they looked.

The workers also had to consider an important question: how enjoyable would it be to interact with that face every day?

In the picture above, the robo-faces are arranged according to how they scored, from the most mechanical to the most human. The researchers found that the robots’ perceived friendliness closely matched the predicted uncanny valley curve. As the faces gradually shift from totally mechanical to more lifelike, their likeability scores go up, then plunge, then climb back up again.

In a second round of experiments, Mathur and Reichling asked a second set of 92 Turk workers to play a game of trust with the robot faces. The workers were given a fictional $100 and asked to decide how much to hand over to the robot. The robot would then “invest” its money, triple it, and decide whether and how much to give back to its human friend.
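The payoff structure of that trust game can be sketched in a few lines of Python. This is a minimal illustration of the rules as described above, not the study’s actual materials; the return fraction here is a made-up example value, since in the experiment the robot’s return was part of the fiction presented to workers.

```python
def trust_game(amount_sent, return_fraction, endowment=100):
    """Sketch of the trust-game payoffs described in the article.

    The worker starts with a fictional $100 (endowment), sends some
    of it to the robot, the robot's stake is tripled, and the robot
    returns some fraction of the tripled amount.
    """
    assert 0 <= amount_sent <= endowment
    assert 0 <= return_fraction <= 1
    invested = amount_sent * 3                 # the robot "invests" and triples it
    returned = invested * return_fraction      # the robot's (fictional) choice
    worker_payoff = endowment - amount_sent + returned
    robot_payoff = invested - returned
    return worker_payoff, robot_payoff

# Example: send $50 and imagine the robot returns half of the tripled stake.
worker, robot = trust_game(50, 0.5)
```

The amount a worker chooses to send is the measure of interest: the more trustworthy the face, the more money it attracts.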

The amount of money that workers chose to give to the robot followed the uncanny valley pattern, though their decisions also seemed skewed by other characteristics, like the robot’s perceived gender.

“There’s a big difference between asking people how much they like a robot and how much they’re willing to actually put their money where their mouth is,” says Mathur. “I think ultimately, these data suggest that the uncanny valley is a real and tangible problem.”

Journal reference: Cognition, DOI: 10.1016/j.cognition.2015.09.008

(Image credits: BINA48: DPA Picture Alliance/Alamy Stock Photo; robo-faces: Maya B. Mathur & David B. Reichling/Elsevier)