As Quantum Computers Begin Solving Problems That Would Take Classical Supercomputers Thousands of Years, Researchers Confront a Central Challenge of Modern Science: Verifying Results Without Repeating the Original Calculations
Quantum computing promises to solve problems deemed impractical, but raises questions about verification. A study from Swinburne University proposes methods to validate quantum results without repeating classical calculations that would take thousands of years, addressing a central challenge toward reliable commercial systems.
The Verification Paradox in Quantum Calculations
Quantum computing has the potential to solve problems previously seen as unsolvable in fields such as physics, medicine, and cryptography, pushing the limits of probabilistic computation with new architectures based on photons.
As efforts grow to build the first large-scale, error-free quantum device, a central question arises: how to confirm the correctness of answers that cannot be verified by conventional methods.
The new study from Swinburne University directly addresses this paradox by proposing techniques that compare theory and experimental results without requiring a complete classical execution, which would take millions or billions of years.
Limits of Supercomputers and the Need for New Methods
“There are a number of problems that even the fastest supercomputer in the world cannot solve, unless one is willing to wait millions, or even billions, of years for an answer,” says Alexander Dellios, lead author of the study.
According to the researcher, validating quantum computers requires methods capable of assessing results in practical time, without waiting for classical machines to reproduce equivalent tasks, which would make any validation infeasible.
This technical limitation creates a bottleneck for scientific confidence and for the transition from experimental research to robust commercial applications.
Validation of Gaussian Boson Samplers
Researchers from Swinburne University developed techniques to verify the accuracy of a specific type of quantum computer known as a Gaussian Boson Sampler, or GBS.
This system uses photons, particles of light, to generate probability distributions whose complete classical calculation would take thousands of years, even on the fastest supercomputers currently available.
“In just a few minutes on a laptop, the methods developed allow us to determine if a GBS experiment is producing the correct answer and what errors, if any, are present,” Dellios explains.
To demonstrate the approach, the team evaluated a recent GBS experiment that would require at least 9,000 years of classical computation to be reproduced.
The analysis revealed that the generated probability distribution did not match the intended target, indicating the presence of additional noise that had not been previously analyzed.
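The core idea of comparing an experimental output distribution against a theoretical target can be illustrated with a simple statistical check. The sketch below is purely illustrative and is not the Swinburne team's method: it samples from a hypothetical "noisy" device, estimates the empirical distribution over (binned) photon-count outcomes, and measures its total variation distance from an assumed target distribution; a distance well above the sampling error signals a mismatch like the one the study reported.

```python
import random
from collections import Counter

def total_variation_distance(p, q, outcomes):
    """Half the sum of absolute probability differences: 0 = identical, 1 = disjoint."""
    return 0.5 * sum(abs(p.get(x, 0.0) - q.get(x, 0.0)) for x in outcomes)

# Hypothetical target distribution over total photon counts (illustrative only).
target = {0: 0.40, 1: 0.30, 2: 0.20, 3: 0.10}

# Simulate a "noisy" device whose output is biased away from the target.
noisy = {0: 0.50, 1: 0.28, 2: 0.15, 3: 0.07}
random.seed(1)
samples = random.choices(list(noisy), weights=list(noisy.values()), k=100_000)

# Estimate the empirical distribution from the samples alone.
counts = Counter(samples)
empirical = {x: c / len(samples) for x, c in counts.items()}

outcomes = set(target) | set(empirical)
tvd = total_variation_distance(empirical, target, outcomes)
print(f"total variation distance from target: {tvd:.3f}")
```

With 100,000 samples the statistical noise on each estimated probability is of order 0.005, so a measured distance near 0.10 cleanly separates "distribution matches the target" from "extra noise is present" in this toy setting.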
Path to Error-Free Quantum Systems
The next challenge is to determine whether replicating the observed alternative distribution remains a computationally difficult task or if the errors caused the system to lose its so-called “quantum essence.”
Addressing this question is considered vital for advancing towards error-free quantum computers at commercial scale, capable of maintaining superior performance over classical systems.
“Developing large-scale, error-free quantum computers is a Herculean task that, if achieved, will revolutionize areas such as drug development, artificial intelligence, and cybersecurity,” says Dellios.
The study emphasizes that scalable validation methods are vital components of this process, making it possible to identify errors, understand their causes, and correct them, ensuring the integrity of quantum systems.
The research was published on September 9, 2025, in the journal Quantum Science and Technology, with DOI 10.1088/2058-9565/adfe16, and received partial funding from NTT Phi Laboratories and the John Templeton Foundation.

The factoring of huge numbers (the basis of today's cryptography) offers a useful contrast. It is practically impossible for a classical computer to find the two prime numbers that produce a 500-digit number; but if a quantum computer supplies the answer, we need only multiply the two numbers on our phone to check that the result matches.
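This verification asymmetry is easy to show in a few lines of Python (the primes here are small, chosen only for illustration):

```python
# Finding the prime factors of a large number is hard for classical machines,
# but checking a proposed answer needs only one multiplication.
n = 1_000_003 * 999_983            # product of two primes (small, for illustration)

# Answer a (hypothetical) quantum device might return:
claimed_factors = (1_000_003, 999_983)

p, q = claimed_factors
assert p * q == n                  # verification is instant, even on a phone
print("factors verified:", p, "x", q, "=", n)
```

Sampling problems like Gaussian Boson Sampling lack such a cheap check, which is precisely why new validation methods are needed.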