A funny thing happened to Recode writer Tess Townsend when she was using Google’s Allo messaging app with a friend. The friend asked the Allo AI assistant if it was a bot. And Allo, in a moment of confusion, responded with something unexpected: a link to a Harry Potter fan site.

Why? Townsend’s friend had searched for Harry Potter a few days earlier. And when he put Allo on the spot with a question about its existence, it spat out a link tied to that recent search. Allo had done something very frightening: It had aired his private search results.

Google called the error an “issue.” And I’m sure that to any software developer, that’s exactly what it was: another bug to be squashed. But anyone who has a child will recognize familiar human behavior in that software bug. Like Allo, children constantly repeat things they’ve heard; the only way to eliminate the risk of them uttering something harmful is to not tell them anything. Is that a bug? No! Of course not.

These mistakes are just part of growing up, the decades-long process of learning the complex interplay of what people say, who they say it to, and where they want it repeated. And AI? It’s still in its toddler phase.

It’s a problem that neither Silicon Valley nor the rest of us using these products has fully internalized yet. Frankly, it doesn’t help that companies like Amazon have a greater incentive to teach Alexa to sell us more products than to figure out how it should respond to serious topics, like sexual assault or familial loss.

As these conversational interfaces become more advanced, and integrate further into our lives, teaching them etiquette isn’t just about making them pleasant or polite. And it isn’t about loading them with jokes to break the news when they spot your melanoma in a vacation photo, either. Giving AIs the social fluency of a real person is an actual design problem that’s going to take a lot of time to solve.

Until then, it’s on us as smart consumers to acknowledge exactly what artificial intelligence is: a crude attempt at building a mind by loading it with data set after data set. That means that in 2017, AI is neither an agnostic buddy who does our bidding nor an omniscient villain we have to fear. AI is a toddler we’re asking about erectile dysfunction. And so when it goes blabbing our personal problems to the world, or when it fails to respond with sensitivity, we shouldn’t be surprised.