
Brain-inspired technique teaches models to doubt wrong answers before making critical decisions

Published on 04/05/2026 at 10:08

Researchers at the Korea Advanced Institute of Science and Technology (KAIST) have developed a brain-development-inspired approach to trustworthy AI: training neural networks on random noise before real data to reduce overconfidence and better align confidence with accuracy.

The researchers presented a training strategy that makes AI more trustworthy by reducing overconfidence in incorrect predictions. The approach, published in Nature Machine Intelligence, adds an initial step of learning from random noise before training on real data.

The proposal addresses a recurring problem in modern artificial intelligence systems. Many models report an answer along with a degree of confidence, but that confidence score does not always match the actual probability of being correct.

This mismatch can lead AI systems to present incorrect answers with high certainty. In high-risk applications, such as medical diagnostic tools or autonomous vehicles, an incorrect prediction with exaggerated confidence can lead to serious consequences.

Training Starts with Random Noise

The technique, created by Jeonghwan Cheon and Se-Bum Paik, includes a brief warm-up phase before the main training. In this stage, the neural network receives entirely random inputs paired with arbitrary labels, with no meaningful relationship between input and output.

After this warm-up, the model proceeds to standard training on the task-specific datasets it needs to learn. The idea is to let the network develop a more realistic estimate of uncertainty before it encounters real patterns.
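The two-phase recipe can be sketched with a tiny softmax classifier. This is a hypothetical NumPy setup, not the authors' code: the model is first "warmed up" on Gaussian noise with random labels, then trained normally on (synthetic) task data.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def train(W, b, X, y, lr=0.1, steps=200):
    # One-layer softmax classifier trained with cross-entropy gradient descent.
    n = len(X)
    onehot = np.eye(W.shape[1])[y]
    for _ in range(steps):
        p = softmax(X @ W + b)
        grad = (p - onehot) / n
        W = W - lr * (X.T @ grad)
        b = b - lr * grad.sum(axis=0)
    return W, b

d, k = 2, 2
W = rng.normal(scale=0.5, size=(d, k))
b = np.zeros(k)

# Phase 1: warm-up on pure Gaussian noise with arbitrary labels,
# mirroring the "no meaningful input-output relationship" stage.
X_noise = rng.normal(size=(500, d))
y_noise = rng.integers(0, k, size=500)
W, b = train(W, b, X_noise, y_noise, steps=100)

# Phase 2: standard training on the real (here synthetic) task data.
X_real = np.vstack([rng.normal(-2.0, 1.0, (200, d)),
                    rng.normal(2.0, 1.0, (200, d))])
y_real = np.array([0] * 200 + [1] * 200)
W, b = train(W, b, X_real, y_real)

accuracy = (softmax(X_real @ W + b).argmax(axis=1) == y_real).mean()
print(f"accuracy after warm-up + real training: {accuracy:.2f}")
```

The warm-up cannot be "solved" (the labels are random), so it mainly reshapes the model's initial state before real learning begins.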

The strategy was described as inspired by neurodevelopment. It seeks to align predictive confidence with response accuracy, a concept known as uncertainty calibration.

Trustworthy AI Depends on Well-Calibrated Uncertainty

Trustworthy AI requires that the confidence reported by the system match the actual probability of being correct. When this relationship fails, the model may appear confident even when faced with unknown, ambiguous, or out-of-distribution data.
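This match between confidence and accuracy is commonly measured with expected calibration error (ECE). The sketch below uses illustrative numbers, not figures from the study: predictions are binned by reported confidence, and each bin's accuracy is compared with its mean confidence.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Weighted average of |accuracy - mean confidence| over confidence bins."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap  # weight by the fraction of samples in the bin
    return ece

# An overconfident model: 90% reported confidence but only 60% actual accuracy.
conf = np.full(100, 0.9)
hits = np.array([1] * 60 + [0] * 40)
print(f"ECE: {expected_calibration_error(conf, hits):.2f}")
```

A perfectly calibrated model would score an ECE of zero; the overconfident example above scores 0.30, the gap between claimed and actual accuracy.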

The researchers pointed out that widely used initialization methods in deep learning may be among the primary sources of overconfidence. The new warm-up stage seeks to correct this problem without requiring additional pre-processing or post-processing.

In tests, models trained with this initial phase showed a lower tendency for overconfident responses. They produced lower confidence scores when incorrect, but maintained adequate confidence levels when responses were correct.

Model Better Recognizes Unknown Inputs

The method also improved the ability of neural networks to identify unknown inputs. This point is important because AI systems frequently encounter situations different from the samples used in training.

The calibration worked both in contexts within the expected distribution and in out-of-distribution situations. This means the model performed better both on data resembling its training samples and on less familiar inputs.

The practical advantage lies in the simplicity of its application. The approach does not rely on complex engineering or extra steps after training, only on the inclusion of a short preliminary session with random noise and random labels.
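One simple way a well-calibrated classifier can flag unfamiliar inputs, shown here as an illustrative sketch rather than the paper's procedure, is to threshold the maximum softmax probability: near-uniform outputs signal data the model does not recognize.

```python
import numpy as np

def max_softmax_confidence(logits):
    # Confidence score = highest class probability under softmax.
    z = logits - logits.max(axis=1, keepdims=True)
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    return p.max(axis=1)

logits_known = np.array([[4.0, 0.0, 0.0]])    # peaked: in-distribution input
logits_unknown = np.array([[0.2, 0.1, 0.0]])  # near-uniform: likely unfamiliar

threshold = 0.7  # hypothetical cutoff, tuned per application
for name, logits in [("known", logits_known), ("unknown", logits_unknown)]:
    c = max_softmax_confidence(logits)[0]
    verdict = "accept" if c >= threshold else "flag as unfamiliar"
    print(f"{name}: confidence {c:.2f} -> {verdict}")
```

An overconfident model defeats this check by assigning peaked probabilities even to unfamiliar inputs, which is why better calibration directly improves unknown-input detection.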

Applications May Involve High-Risk Areas

The technique can be further refined and applied to a wider variety of artificial intelligence models, allowing its potential to be evaluated in more real-world scenarios and in systems of varying complexity.

The work could contribute to the development of safer, more trustworthy AI systems that can better estimate the probability that their predictions are correct. This advance is especially relevant in environments where incorrect decisions can cause serious harm.

Clinical tools, autonomous cars, and other critical applications depend on models that not only get things right but also know when they might be wrong. The new approach suggests that teaching AI to handle uncertainty before it sees real data can reduce overconfidence and make systems more trustworthy.


Fabio Lucas Carvalho

Journalist specializing in a wide variety of topics, such as cars, technology, politics, naval industry, geopolitics, renewable energy, and economics. Active since 2015, with prominent publications on major news portals. My background in Information Technology Management from Faculdade de Petrolina (Facape) adds a unique technical perspective to my analyses and reports. With over 10,000 articles published in renowned outlets, I always aim to provide detailed information and relevant insights for the reader.
