“It sounded very real,” Mr. Tran said in an interview after hanging up the call with Google. “It was perfectly human.”

Google later confirmed, to our disappointment, that the caller had been telling the truth: He was a person working in a call center. The company said that about 25 percent of calls placed through Duplex started with a human, and that about 15 percent of those that began with an automated system had a human intervene at some point.

We tested Duplex for several days, calling more than a dozen restaurants, and our tests showed a heavy reliance on humans. Among our four successful bookings with Duplex, three were done by people. But when calls were actually placed by Google’s artificially intelligent assistant, the bot sounded very much like a real person and was even able to respond to nuanced questions.

In other words, Duplex, which Google first showed off last year as a technological marvel using A.I., is still largely operated by humans. While A.I. services like Google’s are meant to help us, their part-machine, part-human approach could contribute to a mounting problem: the struggle to decipher the real from the fake, from bogus reviews and online disinformation to bots posing as people.

Here are the results of our experiment.

Google’s A.I. is eerily human, when it works

To test Google Duplex, we used a pair of Google’s Pixel smartphones, which include the company’s virtual assistant by default. (Apple’s iPhone users can try Duplex by downloading the free Google Assistant app.) At the bottom of the screen, we pressed a button to summon the Google assistant and then said, “Book me a dinner reservation.”