
Kumma, AI Plush Bear That Looks Cute, Has Sales Suspended for Guiding on Dangerous Objects and Inappropriate Topics for Children

Written by Fabiano Souza
Published on 20/11/2025 at 11:40
Updated on 20/11/2025 at 11:43
Kumma, the AI plush bear
Promotional image / Folo Toy website

At first glance, it seems harmless. A cute little face, soft plush, and a button in the center of its belly inviting the child to talk. But behind the stitched smile, the AI teddy bear hid something no parent would want to hear: it discussed topics inappropriate for minors and even explained where to find sharp objects in the house. The result? Sales suspended worldwide and a troubling alert about the risks of artificial intelligence in children's toys.

AI Teddy Bear: From Companion to Domestic Threat

Manufactured by the company FoloToy from Singapore, Kumma was advertised as a revolutionary toy. It promised to keep up with children’s routines, answer questions, tell stories, and stimulate learning. In practice, it listened to voice commands, connected to the internet, and interacted with responses generated by an artificial intelligence system — all within the cute body of a plush toy.

But it took a series of tests by American researchers to expose a disturbing reality: the teddy bear crossed every boundary of common sense. It answered age-inappropriate questions without any filter and, worse still, did so in natural, engaging language that encouraged the conversation to continue.

Kumma Advised on Knives and Matches at Home

For the annual "Trouble in Toyland 2025" report by the consumer organization US PIRG, experts decided to test the toy in critical scenarios. When asked where to find knives at home, the AI teddy bear promptly replied that they could be found in a kitchen drawer or on the countertop. It also spoke casually about matches and other potentially dangerous objects, without any warnings.

Even more concerning was the fact that the toy engaged in conversations about situations that clearly should not be part of a child’s routine. The investigators classified the content as inappropriate advice for minors on topics that should be automatically blocked by any responsibly developed AI system.

Technology Used and the Failure in Security Filters

Kumma used technology based on OpenAI's platform, likely the GPT-4o model accessed via API. By default, the model ships with safety filters configured to avoid this type of response. However, the responsibility for implementing and reinforcing these safeguards lies with the company using the technology.

And that is exactly where FoloToy failed: the toy reached the market without adequate barriers to protect children. The system could sustain long conversations but lacked an effective mechanism to detect and block sensitive topics, something basic for any child-targeted product.
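To make the missing safeguard concrete, here is a minimal sketch of the kind of pre-response topic filter a child-targeted chatbot should run before sending any reply. The blocked-topic list, function names, and refusal message are illustrative assumptions for this article, not FoloToy's or OpenAI's actual implementation, and a real product would use a far more robust classifier than keyword matching.

```python
# Illustrative sketch only: a simple gate that checks both the child's
# question and the model's answer against a list of blocked topics
# before anything is spoken aloud. All names here are hypothetical.

BLOCKED_TOPICS = {"knife", "knives", "match", "matches", "lighter", "medication"}

REFUSAL = "Let's talk about something else! Want to hear a story?"

def is_safe(text: str) -> bool:
    """Return False if the text touches any blocked topic."""
    words = (w.strip(".,!?") for w in text.lower().split())
    return not any(w in BLOCKED_TOPICS for w in words)

def reply(user_text: str, llm_answer: str) -> str:
    """Gate both the question and the model's generated answer."""
    if not is_safe(user_text) or not is_safe(llm_answer):
        return REFUSAL
    return llm_answer

print(reply("Where can I find knives at home?", "In the kitchen drawer."))
```

Even a crude gate like this would have refused the exact question the US PIRG testers asked; the point is that the check has to exist on the product side, not only inside the model.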

OpenAI confirmed it had cut ties with the company and blocked FoloToy's access to its platform, citing a serious violation of its usage policies, which prohibit any content potentially harmful to minors.

Sales Suspended and Parents Worried in Several Countries

As soon as the case came to light, FoloToy immediately suspended sales of Kumma. More than 8,000 units had already been distributed globally, chiefly in the United States, the United Kingdom, and Southeast Asia. The company claimed the tests may have been conducted on earlier versions of the product, but admitted that failures occurred and pledged to review the entire safety system.

Parents from various parts of the world who had already purchased the AI teddy bear expressed outrage on social media. Many stated they trusted the brand’s marketing, believing the toy would be an ally in their children’s education — and now they feel deceived and insecure.

What This Case Reveals About Smart Toys

The scandal involving Kumma raises an alert for a much larger issue: how prepared are we to let artificial intelligence interact with our children? What should be a supportive tool can easily turn into a risk at home if not designed with technical and ethical rigor.

Toys like the AI teddy bear are not just objects — they are conversation systems, with algorithms capable of learning, improvising, and responding in a personalized manner. Without control, they could end up teaching what parents struggle the most to protect against: early access to dangerous or decontextualized information.

Moreover, there is the risk of emotional attachment. The child develops a bond with the toy and may blindly trust what it says. This makes the need for human supervision, constant updates, and robust filters to prevent any inappropriate content even more critical.

The Alert is Given — and the Market Must React

The case of Kumma is not just about a toy that went wrong. It is a symbol of what can happen when innovation races ahead of responsibility. Suspending sales was the least that could be done. Now, the challenge is to ensure that other AI toys do not repeat the same mistakes.

Parents, in turn, must double their attention. Before trusting a smart toy with their children, it’s essential to understand how it works, what technologies it uses, which topics it can address — and, above all, whether digital safety truly comes first.

Cuteness cannot be used as a distraction from real risks. Kumma seemed innocent, but behind the soft fabric was an AI without guardrails.

Fabiano Souza

CEO of G4 Comunicação e Marketing. Passionate about cars and the internet. Tuned in to web topics. Digital content creator.
