Westworld is a hell of a show, but the sense of dread it elicits is nothing new. Pygmalion sculpted a woman who came to life. Same goes for the Golem, only with mud. The amalgamated Frankenstein's monster jolted awake to get all murderous. Humans creating life in their own image is a cornerstone of fiction.

And until recently, they’ve stayed there. But today, ever-sophisticated robots are graduating from Disneyland-style animatronics into increasingly realistic, intelligent beings. Take the famous human replicas of Hiroshi Ishiguro. Or the theatrical androids from Engineered Arts in the UK, or Sophia, the humanoid without a scalp (OK, maybe that one’s not particularly intelligent). They’re all so entrancing, it’s easy to forget how ethically problematic they could be.

Not in the homicidal Westworld sense—androids anywhere near that smart or physically capable are so far off, it’s not even worth speculating. No, more pressing are the surprising social problems that will come with realistic humanoid robots, which might work the front desk of hotels, or stand in for us at the office, or live with us as companions.

Google ran smack into an early manifestation of those problems last month, when it debuted its Duplex AI-powered voice assistant. The audio algorithm is realistic enough to fool humans into thinking it’s human—and it turns out people don’t like being tricked. Google was forced to clarify that Duplex would introduce itself first as an AI. Which kinda defeats the purpose of making a realistic voice assistant in the first place, but whatever.

Ethical stumbles like this can challenge the budding relationship between humans and physical machines, too. Take ElliQ, a robot-tablet combo that reminds the elderly to stay active while acting as a window into their family’s social media feeds. ElliQ’s designers went out of their way to remind the user they’re talking to a robot. “The voice we use has a robotic accent, so we're not trying to hide that in a voice that's human,” says Dor Skuler, CEO of Intuition Robotics.

ElliQ kind of looks like it has a head, but it doesn’t have eyes. A bit unsettling? Maybe. But it was a conscious choice by Intuition, because humans try to give agency to pretty much anything with eyes. For Skuler, convincing a user that an AI or humanoid robot is human is a dangerous game. “I think it creates the wrong expectation of the experience, and it's somewhat dystopian,” he says. “I don't think we want to live in a world where AIs pretend to be human and try to—I wouldn't say coerce—but lead you down a path where you believe you're talking to a human, and feel these feelings or emotions.”

Which is not to say we can, or should, stop humans from forming relationships with machines. That’s inevitable. In fact, even in beta tests with an early home robot like ElliQ, users see the robot as a "new entity in their lives," Skuler says, rather than a device. To be sure, they know full well it's just a machine—"and yet, there is a sense of gratitude for having something with them to keep them company," Skuler says. (We met ElliQ last year and can confirm it’s pretty charming.)

All this from a very early and relatively simple companion robot. Just imagine the bonds we'll form with far more advanced machines. Say 50 years from now we’ve got realistic humanoids walking among us. They still move a bit strangely, and their facial expressions are still a bit stiff, so they betray themselves as machines. This journey into the humanoid future will take us straight through the uncanny valley—the repulsion we feel when a robot is almost human, but not quite.