Following widespread outcry over the ethical dilemmas raised by Google’s new Duplex system, which lets artificial intelligence mimic a human voice to make appointments, Google has clarified in a statement that the experimental system will have “disclosure built-in.” That seems to mean that whatever eventual shape Duplex takes as a consumer product will involve some type of verbal announcement to the person on the other end that he or she is in fact talking to an AI.

“We understand and value the discussion around Google Duplex — as we’ve said from the beginning, transparency in the technology is important,” a Google spokesperson told The Verge in a statement this evening. “We are designing this feature with disclosure built-in, and we’ll make sure the system is appropriately identified. What we showed at I/O was an early technology demo, and we look forward to incorporating feedback as we develop this into a product.”

Duplex is not yet a working product, and Google CEO Sundar Pichai introduced it onstage on Tuesday at the company’s I/O developer conference with only a pre-recorded phone call. That demo showcased how Google Assistant could sound much more lifelike by making use of DeepMind’s new WaveNet audio-generation technique and other advances in natural language processing, all of which help software more realistically replicate human speech patterns. Duplex sounds so convincingly human in part because Google incorporates verbal tics like “uh” and “um,” along with other colloquial phrases, into the Assistant’s verbal library.

That the many in Google did not erupt in utter panic and disgust at the first suggestion of this... is incredible to me. What of Google's famed discussion boards? What are you all discussing if not this?!?! This is horrible and so obviously wrong. SO OBVIOUSLY WRONG. *headdesk* — zeynep tufekci (@zeynep) May 9, 2018

However, the fact that a piece of software was purposefully tricking a human being — in this case a hair salon receptionist — caused mass alarm among technology critics and those who fear AI technologies are being developed without proper oversight or regulation. Tech critic Zeynep Tufekci called the demo “horrifying” and cited the initial positive audience reaction at I/O as evidence that “Silicon Valley is ethically lost, rudderless and has not learned a thing.”

Google had originally said in a blog post penned by engineers Yaniv Leviathan and Yossi Matias that “it’s important to us that users and businesses have a good experience with this service, and transparency is a key part of that.” The duo added that “we want to be clear about the intent of the call so businesses understand the context,” and that Google will be experimenting with how to pull this off in the months before Duplex is expected to enter testing within the Assistant platform come this summer.

But the company did not explicitly state onstage at I/O that disclosure would be a mandatory feature. Later on, after the keynote, Google representatives told The Verge that the company felt a responsibility to inform individuals who encounter Duplex that they may be talking to a piece of software, and that the Assistant team was looking to safeguard the product against misuses like spam calling. Now, it seems as if Google is taking extra steps to assure the public that it’s taking a stance of transparency following the online outcry. That includes making sure that Duplex will be “appropriately identified” in the future, for the benefit of all parties involved.