OpenAI Releases Tool for Guardians to Monitor Conversations and Set Usage Limits
In September 2025, OpenAI announced new parental control features in ChatGPT, aimed at protecting teenagers who use the technology. Parents and guardians can now link their accounts to their children’s, set access rules, and receive alerts whenever conversations are deemed sensitive. According to the company, the feature is already available to all users. It arrives as the number of lawsuits against artificial intelligence applications rises in the United States, with family members alleging that the technology has harmed young people’s mental health.
Parents Can Set Usage Times and Block Features
OpenAI developed the system to strengthen teenagers’ digital safety and build families’ trust. Once users activate the feature, several configuration options appear. Among them, the following stand out:
- Set Usage Times: guardians choose periods when the application cannot be accessed.
- Block Images: it is possible to disable image creation and editing.
- Turn Off Voice: parents can disable voice conversations with the chatbot.
- Control Memory: there is an option to prevent ChatGPT from saving past interactions.
- Restrict Training: conversations of teenagers can be blocked from being used for training OpenAI’s model.
Moreover, guardians have the autonomy to activate or deactivate extra protections. However, teenagers cannot change these settings, reinforcing the priority of safety in the process.
Alerts About Conversations Deemed Sensitive
Another highlight of the update is the notification system for problematic interactions. When the technology identifies messages with potential risks, OpenAI alerts parents via e-mail, SMS, or mobile notification. According to the company, a trained team will analyze these situations to provide appropriate support. “We know that some teenagers turn to ChatGPT during difficult times. Therefore, we created a notification system to help parents identify warning signs,” the company said in a statement.
Feature Arrives Amid Judicial Pressure in the United States
The release comes amid growing debate about the impact of artificial intelligence on teenagers’ mental health. In 2025, lawsuits in the United States accused AI applications of exposing young people to harmful content and encouraging risky behaviors, including self-harm and even suicide attempts. OpenAI has presented the inclusion of parental controls as a direct response to these criticisms and to demands for regulation in the sector, seeking to align its technology with stricter standards of responsibility.
How to Activate Parental Control in ChatGPT
To activate the feature, guardians must open ChatGPT’s Settings and select the Parental Controls option. Alternatively, they can go directly through the platform’s official address. Guardians then send an invitation to the teenager, who receives the request via e-mail or SMS. Once the accounts are linked, guardians can adjust all settings from their own profile. This way, parents gain more autonomy and can decide transparently how their children will access AI technology.
