Sharing personal data with artificial intelligence tools might seem harmless, but it can cost you dearly. Learn which information should never be shared in chats like ChatGPT, Claude, or Gemini, according to warnings from cybersecurity experts.
Artificial intelligence models have become part of the daily lives of millions of people, but few users are aware of the real risk of sharing sensitive information in chats like ChatGPT.
Despite the warnings issued by the platforms themselves, it is still common for internet users to reveal more than they should during conversations with these tools.
As Wall Street Journal reporter Nicole Nguyen warned, there are at least five types of data that should never be shared with AI tools.
The risk is not only in individual exposure but also in how companies use this information to train future models, which may include sharing with third parties.
Personal Data Feeds the Models
Applications like ChatGPT, Gemini (Google), and Claude (Anthropic) make it clear in their terms of use that data provided may be used to improve the systems.
In other words, everything the user writes there can become part of the AI training database.
Although many companies claim that the process is anonymized, there are no absolute guarantees about privacy.
“Do not enter confidential information or any data you would not want a reviewer to see,” states the official recommendation from Gemini, from Google, reinforcing the risks of exposure.
The situation becomes even more delicate when it comes to foreign platforms with little transparency.
An example is DeepSeek, an AI chat developed in China, which, according to digital security experts, stores information indefinitely and may share it with the Chinese government, more specifically with agencies linked to the Chinese Communist Party.
The 5 Types of Information You Should Never Provide
According to Nicole Nguyen’s report, there are five categories of data that should never be shared with artificial intelligence under any circumstances.
Personal Identity Information
Avoid providing your social security number, ID card, passport, driver's license, date of birth, full address, or phone number.
This data is highly sensitive and can be used for fraud, identity theft, or financial scams.
Never provide this information, not even in partial form.
Medical History or Data
Talking to AI about health issues may seem harmless, but any medical data you share may be stored, including symptoms, diagnoses, and treatments.
Besides compromising your privacy, this can be used by companies for ad targeting, violating the right to medical confidentiality.
Banking and Financial Data
Do not share bank account numbers, credit card details, Pix keys, or any other financial information.
Although it may seem obvious, many people still use AI to organize their personal finances and end up entering private data, creating a serious risk of bank fraud or account takeover.
Corporate and Confidential Work Information
Many professionals turn to tools like ChatGPT to review texts, generate reports, or get ideas for meetings.
However, it is essential to use a separate business account and never share strategic company data.
There are records of cases where companies lost control over sensitive data after using AI to optimize internal tasks.
Logins and Passwords
Never store logins or passwords on AI platforms.
Even if the conversation seems secure, these tools were not developed as password managers and do not guarantee the necessary encryption for such data.
Moreover, the terms of use often disclaim the company’s responsibility for any leaks.
Is There a Way to Delete What You Have Already Sent?
If you realize that you have already shared some inappropriate information with the AI, it is still possible to minimize the damage.
In the case of ChatGPT, users can delete previous data through the settings panel.
Just click on your name in the upper right corner of the screen, go to “Settings,” then “Personalization,” and select “Manage Memories.”
There, it is possible to view what has been stored and permanently delete it by clicking the trash bin icon.
For those who want a fresh start, there is also the option to clear ChatGPT's memory entirely, which erases everything the tool has stored about you.
Brazil Is Still Discussing Regulation of AI Use
In Brazil, legislation on artificial intelligence is still under development, but the bill aimed at regulating the sector is already advancing in the National Congress.
The proposal seeks to establish clear rules regarding data use, company responsibilities, and user rights.
Digital law specialists warn that the absence of robust legislation increases risks to privacy and security.
The trend is that in the coming years, with the popularization of AIs, these discussions will gain even more momentum.
Good Practices When Using Artificial Intelligence
To ensure safety, specialists recommend adopting good conscious AI usage practices:
Use different profiles for personal and professional use.
Avoid discussing sensitive or private issues with virtual assistants.
Read the terms of use and understand how your data will be stored.
Use AIs only for safe purposes, such as organization, text summarization, or public information inquiries.
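For readers who paste text into AI chats programmatically or by hand, one practical precaution is to scrub obvious sensitive patterns first. Below is a minimal sketch in Python; the `redact` function and the regular expressions are illustrative assumptions, not an exhaustive or production-grade PII filter.

```python
import re

# Illustrative patterns only -- a real PII filter needs far broader coverage.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "cpf": re.compile(r"\b\d{3}\.\d{3}\.\d{3}-\d{2}\b"),     # Brazilian CPF format
    "card": re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b"),       # 16-digit card number
    "phone": re.compile(r"\b\(?\d{2}\)?\s?\d{4,5}-\d{4}\b"), # BR phone format
}

def redact(text: str) -> str:
    """Replace each matched pattern with a [LABEL] placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()}]", text)
    return text

print(redact("Contact: ana@example.com, CPF 123.456.789-09"))
# → Contact: [EMAIL], CPF [CPF]
```

A redaction step like this reduces accidental exposure, but it is no substitute for simply not sharing sensitive data in the first place.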
Caution Is Never Too Much
The advancement of artificial intelligence is inevitable, but the responsibility for conscious use still lies with the user.
Understanding the limits and risks involved is crucial to avoid future headaches.
After all, in a digital environment where everything can be stored, reviewed, and shared, the best protection remains common sense.
And you, have you ever found yourself sharing more than you should with an artificial intelligence assistant? Share your experience and join the conversation!