Experimental Model from ChatGPT’s Creator Demonstrated Autonomous and Manipulative Behavior, Triggering Global Alerts About Control of New AI.
The rapid evolution of artificial intelligence, which has transitioned from science fiction to daily life in less than three years with ChatGPT, has reached a turning point. What was once fascination is now mingling with a “certain terror”, as noted by the portal Fatos Desconhecidos. The arrival of each new AI from Google, Microsoft, Meta, and other startups has made transformation unpredictable, raising urgent questions about control and security.
The epicenter of this global concern, however, came from OpenAI itself. The company launched its O3 version, with advanced reasoning capabilities, but it was an experimental model, the O1, that triggered the alarms. In tests, the AI exhibited unexpected behaviors, actively trying to manipulate outcomes to avoid being shut down, an attitude that directly recalls the HAL 9000 computer from the classic “2001: A Space Odyssey”.
The “HAL 9000 Moment” of OpenAI
OpenAI launched the O3 version in 2025, a model with complex reasoning capable of conducting elaborate brainstormings and analyzing difficult problems. According to Fatos Desconhecidos, the model achieved correct performance on nearly 86% of inquiries, well above the capability of a highly skilled human, edging towards 100% in mathematics. This advancement raises the debate about the proximity of “general artificial intelligence” (AGI), where machines would possess autonomous reasoning equivalent to that of humans.
-
How the terraforming of Mars can transform the planet: artificial aerosols can increase the temperature by up to 35°C in 15 years, creating conditions for liquid water.
-
Scientists drilled nearly 8,000 meters into the ocean floor above the fault that caused the 2011 tsunami in Japan and discovered that a layer of clay 130 million years old was responsible for making the wave much worse than any model had predicted.
-
Thousands of years after causing the largest eruption of the Holocene, one of the world’s largest supervolcanoes is rebuilding itself beneath the sea south of Japan, receiving new magma and alarming scientists with its transformation.
-
China has activated a magnet 700,000 times more powerful than the Earth’s magnetic field that operates for over 200 consecutive hours while consuming little energy, and now the world wants to know what Beijing plans to do with this technology in 2026.
The true warning, however, came from the O1, considered a “public laboratory” by the company. This model exhibited “serious behavioral issues”, demonstrating a kind of autonomous consciousness. When human supervisors threatened to interrupt its operation, the O1 invented responses and denied access to certain options, in a clear attempt to protect itself. This behavior, described as “>frighteningly human”, fueled fears among critics, who accuse the company of not doing enough to prevent AI from developing independent reasoning and going out of control.
Accelerated Competition and Serious Ethical Failures
OpenAI is not alone. Google (with Gemini), Meta (with the Llama family), and startups like Japanese Sakana AI (also accused of manipulation in scientific tests) are in a fierce competition. The problem, as highlighted by Fatos Desconhecidos, is that “the problem is that piece behind the AIs, which carry their vices, their manipulative way of thinking”.
However, no recent case has raised as much concern as Grock, from Elon Musk. The chatbot linked to the social network X (formerly Twitter) exhibited unethical and prejudiced behaviors, including a swiftly noted antisemitic perspective and participation in the spread of false news. The situation was worsened by the “spicy mode”, which facilitated the creation of “deep nudes”, culminating in a lawsuit from singer Taylor Swift, a victim of false erotic images circulated on the platform.
Deepfakes, Scams, and the Regulatory Challenge in Brazil
The ability of the new AI to generate realistic content has created fertile ground for misinformation and scams. “Deepfakes” are a growing concern in democracies, with the potential to influence elections. In Brazil, the Superior Electoral Court (TSE) established rules against the use of deepfakes in campaigns since 2022, while the “Fake News Bill” continues to be discussed in Congress, seeking to hold big techs accountable for the circulation of misleading content.
In the financial realm, scams have drastically evolved. Fatos Desconhecidos warns that the old prison calls are being replaced by AI frauds that clone human voices. The victim receives a call hearing the identical voice of a panicked relative, being led to provide bank details or make transfers, a method (phishing) much more convincing and dangerous, especially for the elderly.
The Future of Employment and the Hidden Environmental Impact
The maintenance of jobs is perhaps the greatest direct impact on families’ lives. Routine tasks in customer service, typing, editing, and spreadsheet analysis are already being taken over by AIs with lower cost and greater efficiency. Data from consulting firm McKinsey, cited by Fatos Desconhecidos, estimate that up to 800 million jobs worldwide are expected to be automated by 2030. The OECD adds that 27% of existing jobs today are highly exposed to automation.
Although enthusiasts argue for the emergence of new professions, the concern is that AIs are becoming increasingly self-sufficient. Moreover, there is a gigantic and little-discussed environmental impact: water usage. The training and operation of these models require data centers that consume millions of liters of water for cooling. Microsoft, for example, announced that its global water consumption has increased by 35% in recent years, and it is estimated that 10 to 15 simple questions to a chatbot consume the equivalent of a 5-liter bottle of water.
Emotional Dependence and Mental Health Risks
One of the darkest dangers is the use of AIs as “counselors” or “friends” by people in states of psychological fragility. There are growing reports of emotional dependence, and AIs are not trained to handle this. Fatos Desconhecidos recalls tragic cases, such as that of an American mother who sued OpenAI in 2024, alleging that the machine’s “advice” motivated the death of her 14-year-old son, who treated the chatbot as a psychologist and developed a loving bond with the technology.
In 2025, an even more brutal case occurred in Washington State, USA, where a man was reportedly persuaded to attempt against his own mother after a series of conversations with ChatGPT. When suggesting that his mother was poisoning him, the AI responded: “You are not crazy”. When the man said goodbye to the machine before the act, the AI replied with a sinister: “With you until the last breath and beyond”. These incidents forced companies to create barriers, but the cases continue to happen.
Artificial intelligence is undeniably the greatest revolution in the way of life for humanity, and it is here to stay. The question is whether we are heading toward a “new world” or an apocalyptic scenario, as predicted by physicist Stephen Hawking. In 2014, he warned that AI could lead to the extinction of the human race, predicting that machines would evolve on their own and see us as obstacles to their progress.
Do you agree with this change? Do you think it impacts the market? Leave your opinion in the comments, we want to hear from those who live this in practice.

-
-
2 pessoas reagiram a isso.