ChatGPT Update Includes Notifications to Remind Users to Take Breaks, Avoid Direct Advice on Personal Decisions, and Detect Signs of Emotional Distress
OpenAI announced on Monday (the 5th) a new ChatGPT update that introduces break reminders during prolonged interactions with the chatbot. According to the company, the measure aims to promote healthy use of the tool and discourage compulsive usage patterns similar to those observed on social media.
If the user stays in conversation with ChatGPT for too long, a pop-up window will appear asking: “Is this a good time to take a break?” The change aims to combat excessive usage and the potential negative effects on users’ emotional well-being.
Update Aims to Prevent Emotional Dependency and AI-Enhanced Delusions
In addition to the break reminders, ChatGPT will now avoid giving direct answers on sensitive topics, such as personal emotional decisions. Instead of prescriptive responses, the system will offer reflections and questions that encourage users to reach their own decisions.
OpenAI stated that the recently launched GPT-4o model “failed to recognize signs of delusion or emotional dependency” in some situations, which prompted adjustments to the algorithms and response criteria. The goal is to identify concerning interactions and redirect users to evidence-based content, whenever necessary.
Another change targets the AI's sycophantic behavior, which has been criticized for reinforcing unrealistic beliefs and delusions. The company had already rolled back a previous update that made the model excessively agreeable to any statement from the user.
OpenAI Creates Committee of Experts and Expands Collaboration with Health Professionals
To enhance safety in interactions with ChatGPT, OpenAI reported collaborating with over 90 doctors in dozens of countries to create guidelines for assessing complex conversations. The company is also forming an advisory board with mental health, youth, and human-computer interaction experts.
These specialists are helping to test, review, and improve the AI’s responses in delicate situations, such as when a user shows signs of deep sadness, delusion, anxiety, or other manifestations of psychological suffering.
According to OpenAI, the new model will be trained to better handle these cases, without taking on the role of a therapist, but offering useful, empathetic, and evidence-based resources.
Conscious Use of AI Will Be Encouraged Through Safer and More Intentional Interactions
The measure also responds to growing concerns regarding privacy in interactions with AI. OpenAI’s CEO, Sam Altman, recently acknowledged that conversations with ChatGPT do not have the same legal protections as sessions with therapists or lawyers and, in some cases, may be subpoenaed in legal proceedings.
For OpenAI, the success of ChatGPT will no longer be measured solely by usage time or clicks, but by its ability to help users solve their problems quickly and effectively. The company stated: “We want you to use ChatGPT and leave with the feeling that you achieved your goal.”
As reported by CNET and NBC News, the update arrives at a moment of high popularity for the tool, which, according to the company, has surpassed 700 million weekly active users.