Philip K. Dick was living a few miles north of San Francisco when he wrote Do Androids Dream of Electric Sheep?, which envisioned a world where artificially intelligent androids are indistinguishable from humans. The Turing Test has been passed, and it’s impossible to know who, or what, to trust.

A version of that world will soon be a reality in San Francisco. Google announced this week that Duplex, the company's phone-calling AI, will be rolled out to Pixel phones in the Bay Area and a few other US cities before the end of the year. You might remember Duplex from a shocking demonstration back in May, when Google showed how the software could call a hair salon and book an appointment. To the receptionist on the other end of the line, Duplex sounded like a bona fide person, complete with pauses and “ums” for more human-like authenticity.

Duplex is part of a growing trend toward offloading basic human interaction to robots. More and more text messages are automated: ride-sharing apps text you when your car arrives; food-delivery apps text you when your order is at the door; airlines text you about delays; political campaigns send you reminders to vote. Smartphones predict the words that might complete your own texts; recently, Google's Gmail has begun automating your side of email conversations as well, with suggested replies and autocomplete.

These efforts fall short of full automation; they are suggestions you must act on. But even that may soon be a thing of the past: on Wednesday, Bloomberg reported that Android creator Andy Rubin's company, Essential Products, is going all in on a phone that "will try to mimic the user and automatically respond to messages on their behalf."

Convenient? Maybe. If your pharmacy texts to ask if you want a prescription refilled, it would be nice—I suppose?—if your phone would just respond “yes.” But when you couple automated tasks with human impersonation, you get into uncomfortable territory.

The tech is now good enough to trick us, and the only way we’ll know we’re talking to a bot is because the bot’s creators told it to say so.

It's … weird. As human interaction has moved increasingly online, from email and chat apps to social media networks, authenticity has always been a concern. Back when AIM was the new thing, parents worried about who their teenagers were actually talking to in chat rooms. (Rightfully so! I was probably chatting with some creeps back then.) With the arrival of artificially intelligent chatbots, and their growing sophistication, the worry takes on a different tenor. No longer do we just wonder whether the people we're communicating with are who they say they are; now we have to wonder whether they're people at all.

Privacy experts have worried about this since the beginning of the bot invasion. “The emergence of social bots, as means of entertainment, research, and commercial activity, poses an additional complication to online privacy protection by way of information asymmetry and failures to provide informed consent,” wrote social scientist Erhardt Graeff in a 2013 paper that argued for legislation on social bots that would protect user privacy. In the wake of disinformation campaigns fomented, at least in part, by bots, California passed a law last week requiring online chatbots to disclose that they aren't human.

Consent was a big concern after the Google Duplex demos in May. In the first demo, when the "woman" called to book a haircut on a client's behalf, she didn't identify herself as a robot, and the receptionist on the other end seemed to have no idea she was talking to one. Was it ethical to trick her? If the conversation went much as it would have with a real human, does it matter?

Google’s way of dealing with that question is to build a disclosure into the operational version of Duplex. When people in San Francisco finally use the AI assistant to make appointments, Duplex will alert the person it’s calling that it’s a bot. At least at first, users will only be able to direct the service to make reservations at restaurants that don't have online booking.