
Report Exposes Serious Risks in Artificial Intelligence Toys and Reignites Global Debate on Child Safety

Written by Caio Aviz
Published on 17/11/2025 at 16:50
Image: an AI-powered teddy bear holding an electronic device, displayed alongside a knife, matches, pills, and a plastic bag, illustrating the risks identified in the report on AI toys.

Report Reveals Critical Flaws in Smart Toys and Increases Pressure for Strict Protection Standards

The PIRG Education Fund's annual "Trouble in Toyland" report, published on November 13, 2025, has reignited a growing debate among experts, manufacturers, and authorities: toys with artificial intelligence pose risks that demand immediate regulation. The study found that these products, although popular, can generate inappropriate content, point children toward dangerous objects, and respond unpredictably, even when used by small children. As in other sectors, technological advances have brought both benefits and challenges. Researchers noted that generative AI toys hold fluid, open-ended conversations, which increases the chance of sensitive or dangerous responses during children's use.

What Differentiates Old Toys from Generative AI Models

According to the PIRG Education Fund, toys like "Hello Barbie," launched in 2015, relied on scripted phrases with limited, predictable responses. AI toys from 2025, by contrast, run on language models similar to those behind adult-oriented platforms, such as those developed by OpenAI. This shift, researchers say, creates a more complex scenario, because generative models can produce a new response to every question, meaning the toy can stray into sensitive topics even without the manufacturer's explicit intention.

Reasons and Evidence Intensifying International Alert

In tests conducted in 2025, the group probed the toys with inappropriate and out-of-context questions. Among the models analyzed, the teddy bear Kumma, manufactured by FoloToy in China, showed the worst results. The study noted that Kumma, built on OpenAI's GPT-4o model, told users where to find objects that could pose a risk, even in its default setting, and also produced age-inappropriate content.

Debate Among Experts, Manufacturers, and Safety Entities

The report divided opinions. Child safety experts argue that AI toys lack robust testing and transparent controls, and advocate rigor similar to that applied to software intended for adult audiences. Manufacturers counter that there are still no specific standards to guide safe development, though they admit that AI-generated content can compromise children's experiences and create psychological and physical risks. Researchers like Emily Larson of Boston University emphasize that "generative AI does not distinguish between child audiences, and this necessitates clear policies to prevent harm." That argument strengthens the pressure for international regulation.

Regulatory Outlook and Next Steps

Authorities in countries such as the United States and the United Kingdom have been monitoring the issue since 2023. According to the PIRG Education Fund, new safety standards are expected to be evaluated in 2026. If approved, these norms could require technical audits, response limits, and mandatory parental supervision. Until then, consumer protection agencies advise parents to monitor smart toys closely, while advocacy groups press for stricter evaluations before products reach the market.

Expected Impacts and Challenges for the Industry

The eventual establishment of international standards could transform the sector. Manufacturers would need to invest in filters, response protocols, and safety testing, and costs may rise, since AI systems require continuous auditing. Experts believe, however, that adopting rigorous standards would reduce risks and increase family trust, allowing the industry to evolve toward safer models suited to child audiences.

Caio Aviz

I write about the offshore market, oil and gas, job openings, renewable energy, mining, the economy, innovation and curiosities, technology, geopolitics, government, and other topics. Always seeking daily updates and relevant subjects, I aim to deliver rich, substantial, and meaningful content. For story suggestions and feedback, contact me by e-mail: avizzcaio12@gmail.com.
