Giggling and clapping, Hannah danced around the room. I think the ability to summon music on demand is neat too, and I didn’t want to be rude, so I danced with her. But at the same time I was wondering what it will mean for her to grow up with computers as servants.

The research firm eMarketer estimates that 60.5 million people in the U.S.—a little less than a fifth of the population—will use a digital assistant at least once a month this year, and about 36 million will do so on a speaker-based device like Amazon Echo or Google Home. These things are most popular among people age 25 to 34, which includes a ton of parents of young children and parents-to-be.

And these techno-helpers are not just going to get more popular; they will also get better at responding to queries and orders, and they’ll sound more humanlike, too. At the same time, young users like Hannah will become more comfortable and sophisticated with the technology, going beyond telling Alexa to play a song. They’ll request help with homework or control devices around their home.

It’s a little worrisome. Leaving aside the privacy implications of kids telling an Internet-connected computer all kinds of things, we don’t know much about how this kind of interaction with artificial intelligence and automation will affect how children behave and what they think about computers. Will they become lazy because it’s so easy to ask Alexa and its peers to do and buy things? Or jerks because many of these interactions compel you to order the technology around? (Or both?)

Children in the MIT Media Lab study using a Google Home device. The researchers hope to eventually design such digital agents so that kids will be able to tinker with them.

Some of that may happen. It seems more likely, though, that as with many technologies before this, the utility of digital assistants will outweigh their drawbacks. Already they’re making an incredible amount of data and computer-aided capabilities available directly to children—even those not yet in kindergarten—for learning, playing, and communicating. With Alexa, kids can get answers to all kinds of questions (both serious and silly), hear stories, play games, control apps, and turn on the lights even if they can’t yet reach a wall switch. And this is just the beginning of the kiddie AI revolution.

Does Alexa have feelings?

I wasn’t sure whether Hannah thought Alexa was human. So I asked, and this is what she told me: Alexa is “a kind of robot” who lives in her house, and robots, she reasoned, aren’t people. But she does think Alexa has feelings, both happy and sad. And Hannah says she would feel bad if Alexa went away. Does that mean she has to be nice to Alexa? Yes, she says, but she’s not sure why.

Her interest in her digital assistant jibes with some findings in a recent MIT study, where researchers looked at how children ages three to 10 interacted with Alexa, Google Home, a tiny game-playing robot called Cozmo, and a smartphone app called Julie Chatbot. The kids in the study determined that the devices were generally friendly and trustworthy, and they asked a range of questions to get to know the technologies (“Hey Alexa, how old are you?”) and figure out how they worked (“Do you have a phone inside you?”).

Cynthia Breazeal, one of the researchers and director of the Personal Robots Group at MIT’s Media Lab (as well as cofounder and chief scientist of the company developing an AI robot called Jibo), says that it’s not new for children to anthropomorphize technology. But now it’s happening a little differently.

For young kids like Hannah who can’t yet read, write, or type but can talk a mile a minute, voice-operated assistants could help build social skills and push boundaries—two things that are key to a child’s development. If nuances in the user’s tone can affect how the digital servants respond—which is not that unlikely in the near future—it’s possible that kids who use them will become more adept at communicating with others, whether those others are humans or robots.

That would be a change from what Breazeal sees today: a lot of bad behavior when we interact with each other using technology. She thinks that arises from the abstract context of, say, tweeting, where we may not fully appreciate the consequences of our interactions. She sees a huge opportunity for virtual assistants like Alexa, Google Home, and others to be designed in ways that push us to treat others the way we want to be treated.

Even though that isn’t the way Alexa works yet, it is teaching Hannah some things about how to treat machines, at least. Her mom, Susan Metz, tells me that Hannah is learning there’s a special pattern you have to use when asking Alexa things (you have to say a keyword like “Alexa” first), so she is figuring out that this voice assistant isn’t something you can speak to the way you would a person. Hannah has also learned that she has to be quiet when her mom is talking to Alexa (I can confirm this isn’t carrying over to times when Susan is talking with people). It is possible that simple, routine interactions with this kind of AI will help kids learn even without much advancement in the technology or its design.