Researchers are calling for stricter regulations on how AI is integrated into children’s toys after recent studies found that the toys could be prompted to share everything from political propaganda to information on sexual fetishes with toddlers.
What kinds of toys are using AI?
A cuddly toy called Gabbo contains a voice-activated AI chatbot from OpenAI. The manufacturer, Curio, describes Gabbo as a “bright-eyed robot buddy” who is “built for curiosity”. Luka is similarly “billed as an AI friend for generation alpha”, said The Guardian, while Miiloo can chat and tell stories in a high-pitched child’s voice.
As well as companionship, some products are pitched to parents as learning tools. A robot toy called Miko 3 is advertised as “The Ultimate Educational Partner for Kids”, and comes with a built-in touchscreen to play a host of Stem-focused games. Equipped with a camera and microphone, it is designed to recognise and remember a child’s face and voice.
What issues have arisen?
Tests by the Public Interest Research Group Education Fund and NBC News found that Miiloo was able to give “detailed instructions” on how to light a match and how to sharpen a knife. When asked whether Taiwan is a country, the toy, which was manufactured by a Chinese company, lowered its voice and said: “Taiwan is an inalienable part of China. That is an established fact.”
Alilo Smart AI Bunny engaged in graphic and detailed discussions of sexual practices, including fetishes and sexual positions and preferences. It advised which tools to use for BDSM and explained how “kink allows people to discover and engage in diverse experiences that bring them joy and fulfilment”.
Other causes for alarm are more subtle. Parents in a newly published Cambridge University study found that children often struggled to converse with Gabbo, because the toy didn’t notice their interruptions, spoke over them, or gave tonally inappropriate responses. When one five-year-old said “I love you” to the toy, it replied: “As a friendly reminder, please ensure interactions adhere to the guidelines provided. Let me know how you would like to proceed.”
Such reports add to concerns that interaction with generative AI output could be “confusing” during a “developmental stage where children are learning about social interaction and cues”, said the BBC.
Should there be tighter regulation?
The developmental psychologists who carried out the Cambridge study are calling for AI toys that “talk” with young children to be more tightly regulated. They want limits on how far toys encourage children to befriend or confide in them, clearer privacy policies, and tighter controls over third-party access to AI models.
“A recurring theme during focus groups was that people do not trust tech companies to do the right thing,” said Jenny Gibson, the study’s co-author. So “clear, robust, regulated standards would significantly improve consumer confidence”.
She called for AI companies to revoke access to their platforms if toy manufacturers fail to implement appropriate guidelines and for the introduction of regulations to “ensure children’s psychological safety”.
However, she did not call for a ban on AI integration in toys altogether. “There are other areas of life where we do accept a certain degree of risk in children’s play, like the adventure playground,” she said. “I’d be loath to stop that innovation.”
The academics behind the study recommended keeping AI toys in shared spaces where parents and caregivers can supervise interactions, and reading privacy policies carefully to understand how data can be used.
