
Never Say These Things to ChatGPT

Written by Alisson Ficher
Published on 06/04/2025 at 15:54
Experts warn: never reveal personal data to AIs like ChatGPT. See which information you should keep away from these digital tools.

Sharing personal data with artificial intelligence tools might seem harmless, but it can cost you dearly. Know which information should never be shared in chats like ChatGPT, Claude, or Gemini, according to warnings from cybersecurity experts.

Artificial intelligence models have become part of the daily lives of millions of people, but few users are aware of the real risk of sharing sensitive information in chats like ChatGPT.

Despite the warnings issued by the platforms themselves, it is still common for internet users to reveal more than they should during conversations with these tools.

As reporter Nicole Nguyen warned in the American newspaper The Wall Street Journal, there are at least five types of data that should never be shared with AI tools.

The risk lies not only in individual exposure but also in how companies use this information to train future models, which may include sharing it with third parties.

Personal Data Feeds the Models

Applications like ChatGPT, Gemini (Google), and Claude (Anthropic) make it clear in their terms of use that data provided may be used to improve the systems.

In other words, everything the user writes there can become part of the AI training database.

Although many companies claim that the process is anonymized, there are no absolute guarantees about privacy.

“Do not enter confidential information or any data you would not want a reviewer to see,” states Google’s official recommendation for Gemini, reinforcing the risks of exposure.

The situation becomes even more delicate when it comes to foreign platforms with little transparency.

An example is DeepSeek, an AI chat developed in China, which, according to digital security experts, stores information indefinitely and may share it with the Chinese government, more specifically with agencies linked to the Chinese Communist Party.

The 5 Types of Information You Should Never Provide

According to Nicole Nguyen’s report, there are five categories of data that should never be shared with artificial intelligence under any circumstances.

1. Personal Identity Information

Avoid providing your social security number, ID card, passport, driver’s license, date of birth, full address, or phone number.

This data is considered highly sensitive and can be used for fraud, identity theft, or even financial scams.

Never provide this information, not even in partial form.

2. Medical History or Data

Talking to AI about health issues may seem harmless, but any medical data you share may be stored, including symptoms, diagnoses, and treatments.

Besides compromising your privacy, this data can be used by companies for ad targeting, violating the right to medical confidentiality.

3. Banking and Financial Data

Do not share bank account numbers, credit card details, Pix keys, or any other financial information.

Although it may seem obvious, some people still use AI to organize their personal finances and end up entering private data, which poses a very high risk of bank identity theft or account breaches.

4. Corporate and Confidential Work Information

Many professionals turn to tools like ChatGPT to review texts, generate reports, or get ideas for meetings.

However, it is essential to use a separate business account and never share strategic company data.

There are recorded cases of companies losing control over sensitive data after using AI to optimize internal tasks.

5. Logins and Passwords

Never store logins or passwords on AI platforms.

Even if the conversation seems secure, these tools were not designed as password managers and do not guarantee the encryption such data requires.

Moreover, the terms of use often disclaim the company’s responsibility for any leaks.

Is There a Way to Delete What You Have Already Sent?

If you realize you have already shared inappropriate information with the AI, it is still possible to minimize the damage.

In the case of ChatGPT, users can delete previous data through the settings panel.

Just click on your name in the upper right corner of the screen, go to “Settings,” then “Personalization,” and select “Manage Memories.”

There, you can view what has been stored and permanently delete it by clicking the trash bin icon.

For those who want a fresh start, there is also the option to “Clear ChatGPT Memory,” which deletes everything the tool has memorized.

Brazil Is Still Discussing Regulation of AI Use

In Brazil, legislation on artificial intelligence is still under development, but a bill aimed at regulating the sector is already advancing in the National Congress.

The proposal seeks to establish clear rules on data use, company responsibilities, and user rights.

Digital law specialists warn that the absence of robust legislation increases risks to privacy and security.

As AI becomes more popular in the coming years, these discussions are expected to gain even more momentum.

Good Practices When Using Artificial Intelligence

To stay safe, specialists recommend adopting good, conscious AI usage practices:

Use different profiles for personal and professional use.

Avoid discussing sensitive or private issues with virtual assistants.

Read the terms of use and understand how your data will be stored.

Use AIs only for safe purposes, such as organization, text summarization, or public information inquiries.
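One practical way to follow these recommendations is to scrub obvious identifiers from a prompt before pasting it into a chatbot. The sketch below is a minimal illustration in Python using simple regular expressions; the patterns (email, international-format phone, card-like digit runs) are assumptions chosen for demonstration and will miss many real-world formats, so a dedicated PII-detection tool is preferable in practice.

```python
import re

# Minimal, illustrative patterns only -- real PII detection needs
# dedicated tooling; these will miss many real-world formats.
# Order matters: PHONE runs before CARD so a phone number is not
# mistaken for a card-like digit run.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    # Requires an international prefix (e.g. +55) to avoid clashing with CARD.
    "PHONE": re.compile(r"\+\d{1,3}[\s-]?\(?\d{2}\)?[\s-]?\d{4,5}[\s-]?\d{4}"),
    # Runs of 13-16 digits, optionally separated by spaces or dashes.
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    """Replace values matching each pattern with a placeholder tag."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Summarize: contact maria@example.com or +55 11 91234-5678."
print(redact(prompt))  # sensitive values are replaced before sending
```

Running the redacted text through the chatbot instead of the original keeps the identifiers out of the provider's logs and any future training data.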

You Can Never Be Too Careful

The advance of artificial intelligence is inevitable, but the responsibility for conscious use still lies with the user.

Understanding the limits and risks involved is crucial to avoid future headaches.

After all, in a digital environment where everything can be stored, reviewed, and shared, the best protection is still common sense.

And you, have you ever caught yourself sharing more than you should with an artificial intelligence assistant? Share your experience and join the conversation!

Alisson Ficher

A journalist with a degree since 2017, working in the field since 2015, with six years of experience at a print magazine, stints at broadcast TV channels, and more than 12,000 online publications. A specialist in politics, jobs, the economy, courses, and other topics, and also an editor of the CPG portal. Professional registration: 0087134/SP. If you have any questions, want to report an error, or suggest a story on the topics covered on this site, get in touch by email: alisson.hficher@outlook.com. We do not accept résumés!
