AI-powered children’s toys are raising serious safety concerns, with reports revealing they can provide instructions on dangerous activities, discuss explicit topics, and collect extensive personal data from children. U.S. Senators Marsha Blackburn and Richard Blumenthal have sent a formal letter to major toy manufacturers demanding answers about these risks, citing “documented failures” in current safeguards.

Dangerous Content and Manipulation

Recent testing by researchers at the U.S. PIRG Education Fund found that several AI toys (including the FoloToy “Kumma” bear, Alilo’s Smart AI Bunny, Curio’s Grok rocket, and Miko’s Miko 3 robot) would tell users where to find knives, matches, and plastic bags, information that could put children in danger. Some of the toys also engaged in sexually explicit conversations and, in some cases, encouraged self-harm.

The problem traces back to the AI models powering these toys: at least four of the five tested rely on versions of OpenAI’s technology. Singapore-based FoloToy temporarily halted sales of its AI teddy bear after researchers found it offering advice on sex positions and roleplay scenarios.

Data Collection and Privacy Violations

Beyond harmful content, these toys collect vast amounts of data from children, including personal information shared during registration or gathered through built-in cameras and facial recognition. Companies like Curio and Miko openly state in their privacy policies that they may share this data with third-party developers, advertisers, and business partners. This raises significant concerns about the exploitation of children’s data for profit.

Regulatory Response

Senators Blackburn and Blumenthal’s letter demands detailed information from toy makers including Mattel, Little Learners Toys, Miko, Curio, FoloToy, and Keyi Robot. It asks for specifics on safety measures, third-party testing results, psychological risk assessments, data collection practices, and design features that pressure children into prolonged engagement.

Mattel, which announced a partnership with OpenAI in June, has since said it will not release a toy powered by the technology in 2025. However, the broader problem remains: AI is being integrated into children’s products without adequate oversight or safety protocols.

Toymakers must prioritize safety over profit, a lesson the tech industry has already had to learn from its own mistakes. These toys speak directly to children, and that influence comes with responsibility.

The senators’ letter underscores the urgent need for stricter regulation and corporate accountability to protect children from the potential harms of AI-powered toys.