BOT or NOT? This special series explores the evolving relationship between humans and machines, examining the ways that robots, artificial intelligence and automation are impacting our work and lives.

We have heard the voice of our future AI overlord — and it’s making hair appointments for us.

Last week, Google wowed the world by demonstrating a voice assistant called Duplex that sounds eerily human on the telephone, right down to the um's and mm-hmm's that it uses during its chat with a scheduler at a hair salon.

Some are now questioning how true-to-life the demo actually was. But even if some liberties were taken, Google Duplex was an eye-opener for experts who gathered at Seattle University on Wednesday night for an AI-centric event presented by MIT Enterprise Forum Northwest.

“Seeing that happen so quickly, I think, was a real shock for some people,” said Kat Holmes, a Microsoft veteran who’s the founder of the design company Kata and the author of “Mismatch: How Inclusion Shapes Design.”

Miles Coleman, an instructor at Seattle University who specializes in digital technology and culture, counts himself in that category. “The Duplex thing blew my mind,” he said.

Google says the secret to Duplex’s conversational skill is a recurrent neural network that’s been trained on anonymized phone data. It has learned not only to deal with the sometimes-ambiguous context of human conversations, but also to drop in an occasional umm (technically known as a speech disfluency) and match the cadence of its speech to human expectations.
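Google hasn't published how Duplex decides where those fillers go; its recurrent neural network learns that from real phone data. Purely as a toy illustration of the disfluency idea, here's a hypothetical sketch that sprinkles fillers into a scripted reply at random, where the function name, filler list, and insertion rate are all invented for this example:

```python
import random

def add_disfluencies(words, rate=0.2, seed=42):
    """Toy sketch: sprinkle 'um'-style fillers into a scripted reply.

    A system like Duplex learns *when* a filler sounds natural from
    training data; here placement is just random, to show the concept.
    """
    rng = random.Random(seed)
    fillers = ["um,", "uh,"]
    out = []
    for i, word in enumerate(words):
        # Occasionally insert a filler, but never before the first word.
        if i > 0 and rng.random() < rate:
            out.append(rng.choice(fillers))
        out.append(word)
    return " ".join(out)

print(add_disfluencies("I'd like to book a haircut for Tuesday at noon".split()))
```

The real trick, of course, is not inserting the fillers but learning human-like timing and cadence for them, which is what the trained network provides.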

This summer, Duplex will be deployed as part of Google’s voice assistant for mobile devices. But is humanity ready for artificially intelligent agents that are hard to distinguish from actual humans?

Coleman said the concerns are similar to those posed by social-media bots. Automated postings turned out to be a big factor in the “fake news” controversies stemming from the 2016 presidential campaign. In response, Twitter purged tens of thousands of Russia-linked bots and revised its terms of service to rule out high-volume automated tweeting.

Some want to go even further: In California, for example, there’s a legislative move to force Twitter, Facebook and other social-media outlets to ban bots from impersonating humans, and to require the holders of bot accounts to disclose that the accounts are not actually run by humans.

It might make sense to require Duplex and similar consumer-grade chatbots to provide a similar disclosure, along the lines of the “This Call May Be Recorded” disclaimer we’ve grown used to hearing when calling customer service.

[Update for May 19: Google plans to incorporate such disclaimers, according to Bloomberg News. Google executives reportedly told employees that the Duplex system would identify itself as a Google assistant. And in some jurisdictions, including Washington state, Duplex would also let the people on the other end of the phone line know that the call is being recorded.]

“We’re faking a human, which we give a particular value,” Coleman said. “And when those entities speak, we give a particular kind of weight that we wouldn’t give to just a machine. It’s like the new version of robo-calls, except it’s working for individuals instead of companies.”

Which poses a puzzler: What happens when a robo-caller calls a robo-answerer? If the result is merely two machines politely chatting past each other, perhaps we won’t have to worry about the AI apocalypse.

More sound bites from the MIT Enterprise Forum discussion, titled “Human Machine Interfaces and the Future of Interaction”: