Sam Altman of OpenAI Says Sensitive Conversations with Artificial Intelligence Lack the Legal Protection of Sessions with Psychologists
The CEO of OpenAI warns that intimate conversations with artificial intelligence could be exposed in legal proceedings. In a recent interview, Sam Altman drew attention to the use of the chatbot as an informal therapist, warning that AI does not guarantee the confidentiality that mental health professionals do.
It is urgent to establish clear rules to protect the privacy of users who share sensitive issues with ChatGPT, especially young people who use AI as a coach, advisor, or “listener” in moments of vulnerability.
Conversations with AI Do Not Have Guaranteed Legal Confidentiality

During the interview, Sam Altman expressed concern about how teenagers and adults are using ChatGPT for therapeutic purposes. He compared this practice to consultations with doctors or lawyers, who have the legal protection of professional confidentiality—something that does not apply to ChatGPT.
“At this moment, if you talk to a therapist, a lawyer, or a doctor about these issues, there is a legal privilege for that. We have not figured that out for when you talk to ChatGPT”, stated the CEO of OpenAI.
According to the company’s policies, interactions with AI can be read by moderators and stored for up to 30 days, even if the user deletes the history. Moreover, legal cases may require OpenAI to disclose these conversations, including chats that have already been deleted.
Data Can Be Used Even Against the User
In addition to temporary retention, the content of conversations is used to train the model and to identify misuse of the platform. This means that, unlike apps such as WhatsApp or Signal, which use end-to-end encryption, OpenAI can log and review what users say in ChatGPT conversations.
The discussion about privacy intensified after a June 2025 court order, issued in a copyright lawsuit brought by outlets including The New York Times, required OpenAI to preserve user records, including deleted chats. The company is appealing the order, which is part of a larger dispute over copyright and access to user data.
ChatGPT Is a Tool, Not a Substitute for Therapy

Altman’s remarks also serve as a warning about the confusion between technology and emotional care. While ChatGPT can offer support in clear and friendly language, it does not replace the clinical listening of a psychologist, nor does it have legal obligations regarding the confidentiality of conversations.
Experts reinforce that for sensitive topics like mental health, trauma, or delicate personal decisions, the best approach is still to seek professional help, where there is ethical, technical, and legal backing. In this case, AI should be viewed as a complementary resource—never as a substitute.
Have you ever used AI to vent or seek advice? Do you believe it should offer confidentiality like health professionals? Share your opinion in the comments.
