It is one of the most poetic, ingenious terms in all of robotics: the uncanny valley. Even without any explanation, it's evocative. Dive deeper into the theory, and it gets better. In a 1970 paper in the journal Energy, roboticist Masahiro Mori proposed that a robot that's too human-like can veer into unsettling territory, tripping the same psychological alarms associated with a dead or unhealthy human. "This," Mori wrote, "is the Uncanny Valley." Visualized as a curve, our sense of familiarity theoretically tracks upward as we encounter increasingly human-like machines. The steep drop-off that marks the point where a machine becomes too human-like turns into a valley when you include the subsequent sharp rise associated with a real human being, or a perfect android. Those robots unlucky enough to topple into the valley are victims of our intimate, hard-wired perception of human biology and social cues.

Shuffling and convulsing at the very bottom of that valley are technology's most repulsive changelings: the humanoid robots with taut, rubber faces steadily emerging from Asian labs, and Hollywood's computer-generated stand-ins, their eyes darting, glassy and corpse-like. Over the course of four decades, the uncanny valley has graduated from a hotly debated theory, describing society's revulsion for robots that are simultaneously a little too human-like and not human enough, to what passes for fact among film critics, technology journalists and online commenters alike. It's another term for a specific sort of hubris, and a standing warning: Stick to Roombas and blue-skinned aliens and you'll be fine. But build a realistic female android or render a CG version of Tom Hanks in a train conductor's outfit, and the uncanny valley will swallow you whole.

Unless, of course, it doesn't really exist. Despite its fame, or because of it, the uncanny valley is one of the most misunderstood and untested theories in robotics. While researching this month's cover story ("Can Robots Be Trusted?" on stands now) about the challenges facing those who design social robots, we expected to spend weeks sifting through an exhaustive supply of data related to the uncanny valley—data that anchors the pervasive but only loosely quantified sense of dread associated with robots. Instead, we found a theory in disarray. The uncanny valley is both surprisingly complex and, as a shorthand for anything related to robots, nearly useless.

At the heart of Mori's proposed valley is a witch's brew of cognitive dissonance. It's the familiar colliding with the alien. Our primal instincts want to welcome the android into the pack, even while other evolutionary instincts tell us to bash its head with the nearest bone. As highly advanced human beings, we do neither—we stare wide-eyed, our brains sputter, and we leave comments on YouTube calling a robot "creepy."

Mori's paper sounds like a revelation, an academic's articulation of the robot creep factor that so many of us experience. It's a compelling argument. But from the skeptic's perspective, the uncanny valley is a surprisingly easy target: Throughout his entire career, Mori never presented data to support his proposed graph. "It's not a theory, it's not a fact, it's conjecture," says Cynthia Breazeal, director of the Personal Robots Group at MIT. "There's no detailed scientific evidence," she says. "It's an intuitive thing."

A Thought Experiment

One of the most widely cited concepts in robotics was essentially left on the academic world's doorstep. In a 2005 letter to Karl MacDorman, director of the Android Science Center at Indiana University, declining an invitation to speak about his landmark paper, Mori wrote, "While I introduced the notion of the Uncanny Valley, I have not examined it closely so far." Mori did offer a few more observations, including a minor revision of his theory. Instead of positioning a human's face at the height of the curve, Mori wrote that "there is something more attractive and amiable than human beings in the further right-hand side of the valley. It is the face of a Buddhist statue as the artistic expression of the human ideal."

The onus of testing the validity of Mori's hypothetical valley has fallen largely on MacDorman. It's a daunting task. Proposing a wide-reaching theory is one thing, but applying any sort of academic rigor to vague notions of familiarity, repulsion and even humanity has shattered the theory into countless smaller ones. "It turns out that there may be more than one uncanny valley," MacDorman says. "It's not the overall degree of human likeness that makes [a robot or animated character] uncanny. It's more a matter of a mismatch. If you have an extremely realistic skin texture, but at the same time cartoonish eyes, or realistic eyes and an unrealistic skin texture, that's very uncanny."


In a recent study conducted by MacDorman, the uncanny effect seemed to be tied to gender. Subjects were put in the position of doctors, interacting with a hypothetical female patient. Female subjects were sympathetic to the patient's requests, whether she was represented by a real person or by a poorly rendered computer animation. The men sided with the real patient, but not the uncanny, computer-generated one. What does this prove? That we're still only barely scratching the surface of the brain's social algorithms, which become even more complicated and unpredictable as we interface with technology, whether it has a face or not. Like many researchers studying human–robot interaction, MacDorman is less interested in exploring our revulsion toward robots than in using robots to dive deeper into the human intellect.

That the uncanny valley began as a groundless thought experiment, and has since splintered into a range of more self-contained experiments, doesn't completely invalidate it. It simply means the valley has grown up, and that casual references to it are only slightly off-base. After all, there's still the matter of the uncanny's power to horrify us and validate our fears of robots. If machines can trigger cognitive dissonance in the human brain, roboticists must continue to carefully tweak their creations, to avoid individual revulsion and even a society-wide blowback. That would be a major concern for the designers and manufacturers of the coming generation of social robots.

It would be, if the uncanny didn't evaporate on contact.

A Hypothetical Chasm

David Hanson, a roboticist whose company, Hanson Robotics, specializes in ultra-realistic robotic heads, actively seeks out the uncanny. He keeps the motors in his rubber-skinned faces noisy and overtly robotic, and sometimes presents these lifelike talking heads mounted on a stick. And for better or worse, even the shock value of Hanson's buzzing, decapitated heads doesn't stick around for long. "In my experience, people get used to the robots very quickly," Hanson says. "As in, within minutes."

According to all of the roboticists and computer scientists we interviewed, the uncanny is in short supply during face-to-face contact with robots. Two of the robots that inspire the most terror—and accompanying YouTube comments—are Osaka University's CB2, a child-like, gray-skinned robot, and KOBIAN, Waseda University's hyper-expressive humanoid. In person, no one rejected the robots. No one screamed and threw chairs at them, or smiled politely and slipped out to report lingering feelings of abject horror. In one case, a local Japanese newspaper tried to force the issue, bringing a group of seniors to visit the full-lipped, almost impossibly creepy-looking KOBIAN. One senior nearly cried, claiming that she felt like the robot truly understood her. A previously skeptical journalist wound up smiling and cuddling with the ominous little CB2. The only exception was a princess from Thailand, who couldn't quite bring herself to help CB2 to its robotic feet.

Royalty notwithstanding, the uncanny effect appears to be an incredibly specific and specialized phenomenon: It seems to happen, when it does, remotely. In person, the uncanny vanishes. There's nothing in the way of peer-reviewed evidence to support this, but then, there's almost nothing to confirm the uncanny effect's existence in the first place. As an unsupported theory that has morphed into a nerdy breed of urban legend, anecdotes are all we have to work with.

Here's one more: Since MIT's Nexi was the focal point of our social robotics story, I fully prepared myself for a date with the uncanny. After all, MIT's video of the advanced social robot, posted on the Media Lab's site as well as YouTube, is almost overtly unnerving. Watch it for yourself. It's stark and strange. From the stiff pivot of Nexi's body to the way the mouth on its swollen doll head flaps open as it speaks, it's a prime example of the howling depths of the uncanny valley.

And yet, when I met Nexi, and its giant blue eyes snapped to attention, and that same freakishly child-like, engorged head—really just a mask, barely concealing a tangle of motors and cables that become visible in profile—turned to me, all social distance collapsed. There was no time to be intellectually panicked about robots. And any sense of dissonance proposed by Mori or anyone else was missing. Sure, there was a vaguely unnerving hum as it swiveled around the room, and a more disturbing whine whenever it clenched its metallic fists. But in person, most robots, particularly ones designed to interact with humans, are simply not scary. They're bumbling and a little helpless. Like a pet or a child, you cut them slack. In the most generalized, vaguely accurate way, the uncanny valley might apply to the corpse-eyed CG ghouls of The Polar Express or the recent animated Christmas Carol. But when it comes to robots, it's a largely hypothetical chasm, a term that only partially describes a fleeting, cognitive glitch that has no bearing on the way humans will live with machines.

There are real risks and concerns associated with robots, from the debate over building morality into artificial intelligence to the psychological dangers of using robots to keep tabs on the young and the elderly. In the larger story of human–robot interaction, a few twitchy humanoids on YouTube aren't going to hurt anyone.
