
Beware, You May Be Chatting With Someone Who Doesn’t Even Exist! AI-Generated Profile Pictures Are Deceiving Almost Everyone, and Worse, They Are Already Being Used for Identity Theft. Learn to Identify Them in Minutes and Avoid Falling for the Scam.

Written by Flavia Marinho
Published on 08/02/2026 at 19:04
Updated on 08/02/2026 at 19:05
AI Faces Fool Most of Us, But 5 Minutes of Training May Help You Spot Fakes

Fake Faces, Fake Profiles, Real Scams: The Human Brain Is Losing the Fight Against Artificial Faces, and That Makes Romantic Scams and Fraud Easier. See How to Identify Them with a Little Training.

There is a silent trap spreading across the internet: fake faces that look more real than real people. And that is not an exaggeration from someone who has seen "too many memes." In a short time, AI image generators have become unsettlingly good, to the point of creating portraits that our brains accept without question.

The dangerous part is that it doesn’t stop at “wow, how realistic.” This kind of face can turn into a fake profile, a romantic scam, identity theft, fraudulent registrations, and all that digital muck that always finds a way to enter real life.

The central point of the study is simple and somewhat annoying: most of us cannot identify a face made by AI. Even when a person thinks they can. Even when they are trying to pay attention.

And even when half of the images are fake, so that guessing blindly should already yield 50% accuracy. In practice, the opposite happens: artificial faces are so convincing that they pull many people below chance.

The Test Put Our “Eye for People” Against the Machine

The researchers worked with 664 volunteers, divided into two groups. On one side, people with typical face recognition ability, that is, around average for the population.

On the other, the so-called super-recognizers, people who have above-average ability to compare and recognize faces, something that has already been demonstrated in other tests of this kind.

The task was not just one, and that matters. In one experiment, the person saw a single face and had to decide whether it was real or created by AI.

In the other, the participant saw a real face and an AI-generated face side by side and had to point out which one was fake.

These are different situations because looking at an isolated image requires the brain to “judge” without reference; meanwhile, comparing two faces forces a more active checking.

The result, without training, is sobering. Among the super-recognizers, the accuracy rate for identifying AI faces was 41%. Among participants with typical ability, it dropped to 31%.

And here is a detail that seems small, but it’s the kind of thing that changes the interpretation: since half of the images were fake, chance would be 50%.

In other words, instead of performing above random guessing, people performed below it, suggesting that many artificial faces are not just convincing but actively misleading, pulling perception to the wrong side.
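Whether an accuracy rate really sits below the 50% chance level depends on how many guesses are behind it. The article does not give the per-condition trial counts, so the numbers below are hypothetical, but a quick binomial tail check shows how unlikely scores like 31% or 41% out of 100 coin-flip guesses would be:

```python
from math import comb

def binom_tail(n, k, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): chance of k or more hits by guessing."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical example: 100 trials at the typical observers' 31% rate.
# How likely is a score this LOW under pure 50/50 guessing?
p_low_typical = 1 - binom_tail(100, 32)   # P(X <= 31)
print(f"P(31 or fewer correct by chance) = {p_low_typical:.6f}")

# And the super-recognizers' 41% rate over 100 hypothetical trials:
p_low_super = 1 - binom_tail(100, 42)     # P(X <= 41)
print(f"P(41 or fewer correct by chance) = {p_low_super:.4f}")
```

Under these assumed counts, both scores would be very unlikely to come from blind guessing, which is what "performing below chance" means in practice.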

A 5-Minute Training Does Not Turn Anyone into Sherlock, but Helps a Lot to Recognize Details

The most interesting part comes when training is introduced, and it was deliberately short. We are not talking about a course, a certification, or an extensive class taught by an expert.

It was a quick briefing, just a few minutes long, to teach participants to look for certain classic signs of AI-generated images.

With this little push, the story changes, but not for everyone in the same way. Those with typical face recognition went up to 51% accuracy, practically the same as chance. 

In other words: for most people, the brief training works no miracles. But for the super-recognizers, performance jumped to 64% accuracy, meaning they correctly picked out the fake faces well over half the time.

It’s a result that sends a very direct message. Training works better when it encounters already strong human “hardware.”

And that is quite useful for security and online identity verification, where teams can be staffed with more capable people rather than relying solely on automated tools or intuition.

In the midst of this discussion, ScienceAlert highlighted a point that should give anyone pause before trusting a profile photo: AI-generated images are becoming increasingly easier to create and more difficult to detect, and precisely for this reason, method testing and training have become part of the security package.

What Are the Signs That Give Away an AI-Generated Face?

The training used in the study focused on practical clues because the goal was to be applicable in the real world.

Among the most useful signs are things like weird or missing teeth and a strange blur at the edges of hair and skin.

It’s the kind of defect that goes unnoticed when a person looks at “the whole,” but becomes more visible when the eye learns to look for where AI tends to slip up.

And there is a technical reason behind this. Many of these images are created by a method called generative adversarial network, or GAN, which is basically a competition between two systems: one invents the face, the other tries to catch what is fake, and this cycle forces the generator to improve until it becomes very convincing.
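That generator-versus-discriminator competition can be caricatured in a few lines. The sketch below is a toy illustration of the adversarial idea only, not a real GAN: the "generator" is a single parameter `mu`, the "discriminator" scores a sample by its distance from a running estimate of real data, and every name and number here is invented for the example. Real GANs use neural networks on both sides.

```python
import random

random.seed(0)
REAL_MEAN = 4.0  # "real data" is drawn from a Gaussian centered at 4.0

def real_sample():
    return random.gauss(REAL_MEAN, 0.5)

def fake_score(x, real_mean_estimate):
    # Discriminator: higher score means "looks more fake"
    # (farther from its current idea of what real data looks like).
    return abs(x - real_mean_estimate)

mu = 0.0             # generator's single parameter: the mean of its output
real_mean_est = 0.0  # discriminator's evolving model of real data
lr = 0.02

for step in range(5000):
    # Discriminator step: refine its estimate of the real distribution.
    real_mean_est += 0.05 * (real_sample() - real_mean_est)

    # Generator step: produce a fake, ask the discriminator how fake it
    # looks, and nudge mu in the direction that lowers that score.
    noise = random.gauss(0.0, 0.5)
    eps = 1e-3
    grad = (fake_score(mu + eps + noise, real_mean_est)
            - fake_score(mu - eps + noise, real_mean_est)) / (2 * eps)
    mu -= lr * grad

print(f"generator mean after training: {mu:.2f} (real mean: {REAL_MEAN})")
```

After a few thousand rounds of this push and pull, the generator's output distribution drifts toward the real one, which is the core of why GAN faces end up so convincing.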

However, “convincing” is not the same as “perfect.” What the study suggests is that it’s possible to train the human brain not to be seduced by the whole and to check details that typically reveal the fabrication. And this connects with the real risk: when fake profiles use faces that look “too much like people,” the barrier of distrust falls, and the scam walks in through the front door.

In the end, the moral is not to become paranoid about every selfie. It’s to understand that, from now on, a pretty photo proves nothing.

The difference between falling victim and escaping might lie in a simple habit: being suspicious of what looks perfect, looking at edges, looking at teeth, looking at texture, and remembering that the internet has become a place where even a face can be fabricated in seconds.

Flavia Marinho

Flavia Marinho is a postgraduate engineer with extensive experience in the onshore and offshore shipbuilding industry. In recent years, she has dedicated herself to writing articles for news sites covering the military, security, industry, oil and gas, energy, shipbuilding, geopolitics, jobs, and courses. Contact flaviacamil@gmail.com or WhatsApp +55 21 973996379 for corrections, story suggestions, job listing announcements, or advertising proposals on our portal.
