Aristotle was designed to ‘soothe babies, reinforce good manners, help learn a language’ until campaigners argued it would replace caring with fake nurturing

Children’s toymaker Mattel has been forced to cancel plans to produce an AI-powered babysitter, after a raft of complaints that the product would inflict psychological damage on young children.

First announced in January, the device, to be called Aristotle, was a tall cylinder reminiscent of Amazon’s Echo smart speaker. It would have offered many of the same features as other smart speakers, allowing owners to order nappies online or search by voice for child-raising advice on the internet.

But the Aristotle was also intended to go much further than most smart speakers, with a paired Bluetooth camera to monitor young children and the ability to interact directly with them, “helping [to] soothe a crying baby … reinforce good manners in kids, and even help kids learn a foreign language”.

A campaign organised by US nonprofit Campaign for a Commercial-Free Childhood demanded Mattel not release the Aristotle. It garnered 1,500 signatures, and argued that the product “attempts to replace the care, judgment and companionship of loving family members with faux nurturing and conversation from a robot designed to sell products and build brand loyalty”.

“Young children should not be guinea pigs for AI experiments,” the campaign letter concluded.

One child psychologist, speaking to the Washington Post, said her main concern “is the idea that a piece of technology becomes the most responsive household member to a crying child, a child who wants to learn, or a child’s play ideas.”

The campaign also attracted the attention of two US politicians, Democratic senator Edward Markey and Republican representative Joe Barton. They sent Mattel a letter at the end of September expressing “serious privacy concerns” about the device’s ability to create an “in-depth profile of children and their family”.

“It appears that never before has a device had the capability to so intimately look into the life of a child,” the pair’s letter continued, before asking a series of pointed questions about the abilities of the device, including: whether it would use facial recognition technology; whether responses from children would be recorded and saved; whether the device would be recording even if children weren’t directly engaging with it; whether Mattel would sell information to third parties; and whether it would delete personally identifiable information about customers.

“We welcome the innovative and responsible use of artificial intelligence and speech recognition,” the letter says, “but we believe consumers should know how this product will work, and what measures Mattel will take to protect families’ privacy and secure their data.”

In response Mattel said its new chief technical officer Sven Gerjets “conducted an extensive review of the Aristotle product and decided that it did not fully align with Mattel’s new technology strategy.” Gerjets joined the company in July, six months after the Aristotle was announced.

“The decision was then made not to bring Aristotle to the marketplace as part of an ongoing effort to deliver the best possible connected product experience to the consumer,” the company said.

It is not the first time Mattel has run into trouble over connected toys. A wifi-enabled Barbie released in 2015 was found to be easy to hack, giving an attacker access to the doll’s system information, account information and stored audio files, as well as direct access to its microphone.