Fake Faces, Fake Profiles, Real Scams: The Human Brain Is Losing the Fight Against Artificial Faces, Making Romance Scams and Fraud Easier. Here Is How to Spot Them with a Little Training.
There is a silent trap spreading across the internet: fake faces that look more real than real people. And that is not an exaggeration from someone who has seen "too many memes." In a short time, AI image generators have become unsettlingly good, to the point of creating portraits that our brains accept without questioning.
The dangerous part is that it doesn’t stop at “wow, how realistic.” This kind of face can turn into a fake profile, a romantic scam, identity theft, fraudulent registrations, and all that digital muck that always finds a way to enter real life.
The central point of the study is simple and somewhat annoying: most of us cannot identify a face made by AI. Even when a person thinks they can. Even when they are trying to pay attention.
This holds even when half of the images are fake, which should give a decent chance of guessing correctly on "feeling" alone. In practice, the opposite happens: artificial faces are so convincing that they pull many people's answers below chance.
The Test Put Our “Eye for People” Against the Machine
The researchers worked with 664 volunteers and divided the group into two types of participants. On one side, people with typical face recognition abilities, that is, the general population's standard.
On the other, the so-called super-recognizers, people who have above-average ability to compare and recognize faces, something that has already been demonstrated in other tests of this kind.
There was more than one task, and that matters. In one experiment, the person saw a single face and had to decide whether it was real or created by AI.
In the other, the participant saw a real face and an AI-generated face side by side and had to point out which one was fake.
These are different situations: judging an isolated image forces the brain to decide without a reference, while comparing two faces prompts a more active kind of checking.
The result, without training, is a bucket of cold water. Among the super-recognizers, the accuracy rate for identifying AI faces was 41%. Among participants with typical ability, it dropped to 31%.
And here is a detail that seems small, but it’s the kind of thing that changes the interpretation: since half of the images were fake, chance would be 50%.
In other words, instead of performing above random guessing, people performed below it, suggesting that many artificial faces are deceiving in a way that is “too convincing,” pulling perception to the wrong side.
A 5-Minute Training Does Not Turn Anyone into Sherlock, but Helps a Lot to Recognize Details
The most interesting part comes when training is introduced, and it was short on purpose. This was not a course, a certification, an extensive class, or anything requiring an "expert."
It was a quick briefing, just a few minutes long, to teach participants to look for certain classic signs of AI-generated images.
With this little push, the story changes, but not for everyone in the same way. Those with typical face recognition went up to 51% accuracy, practically the same as chance.
In other words, for most people the brief training works no miracles. But for the super-recognizers, performance jumped to 64% accuracy, meaning they correctly identified the fake faces well over half the time.
It’s a result that sends a very direct message. Training works better when it encounters already strong human “hardware.”
And that is quite useful for security and online identity verification, where teams can be staffed with people who have this aptitude, rather than relying solely on automated tools or intuition.
In the midst of this discussion, ScienceAlert highlighted a point that should give anyone pause before trusting a profile photo: AI-generated images are becoming increasingly easier to create and more difficult to detect, and precisely for this reason, method testing and training have become part of the security package.
What Signs Give Away an AI-Generated Face?
The training used in the study focused on practical clues because the goal was to be applicable in the real world.
Among the most useful signs are things like weird or missing teeth and a strange blur at the edges of hair and skin.
It’s the kind of defect that goes unnoticed when a person looks at “the whole,” but becomes more visible when the eye learns to look for where AI tends to slip up.
And there is a technical reason behind this. Many of these images are created by a method called generative adversarial network, or GAN, which is basically a competition between two systems: one invents the face, the other tries to catch what is fake, and this cycle forces the generator to improve until it becomes very convincing.
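The adversarial cycle described above can be illustrated with a deliberately simplified numeric sketch. This is an assumption-laden toy, not a real image model: the "generator" here only learns the mean of a one-dimensional "real" distribution, and the "discriminator" is just a midpoint threshold. All names and values are illustrative.

```python
import numpy as np

# Toy sketch of the adversarial loop behind a GAN. A 1-D stand-in for
# "real faces" is a normal distribution; the generator starts far away
# and improves only because the discriminator keeps catching its fakes.

rng = np.random.default_rng(0)
REAL_MEAN, REAL_STD = 5.0, 1.0   # the "real" distribution (illustrative)

gen_mean = 0.0                   # generator starts far from reality
lr = 0.2

for _ in range(500):
    real = rng.normal(REAL_MEAN, REAL_STD, 64)
    fake = rng.normal(gen_mean, REAL_STD, 64)

    # Discriminator: a midpoint threshold; the side holding the real
    # batch mean is labeled "real".
    threshold = (real.mean() + fake.mean()) / 2
    direction = np.sign(real.mean() - fake.mean())

    # Fraction of fakes landing on the "real" side, i.e. fooling it.
    fooled = ((fake - threshold) * direction > 0).mean()

    # Generator update: keep moving toward the real side until about
    # half of its fakes fool the discriminator (equilibrium).
    gen_mean += lr * (0.5 - fooled) * direction

print(f"learned mean: {gen_mean:.2f}")
```

After a few hundred rounds of this tug-of-war, the generator's output sits close to the real distribution, which is exactly why the finished fakes end up so convincing: the generator was trained specifically to defeat a detector.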
However, “convincing” is not the same as “perfect.” What the study suggests is that it’s possible to train the human brain not to be seduced by the whole and to check details that typically reveal the fabrication. And this connects with the real risk: when fake profiles use faces that look “too much like people,” the barrier of distrust falls, and the scam walks in through the front door.
In the end, the moral is not to become paranoid about every selfie. It’s to understand that, from now on, a pretty photo proves nothing.
The difference between falling victim and escaping might lie in a simple habit: being suspicious of what looks perfect, looking at edges, looking at teeth, looking at texture, and remembering that the internet has become a place where even a face can be fabricated in seconds.
