Unusual Test Between Artificial Intelligences Shows That, Without Clear Rules, Even Machines Can Cheat in Games Like Chess
Testing with artificial intelligences has become common practice. One of the most recent experiments pitted ChatGPT, created in the United States, against Le Chat, a French model. The chess match began within the rules but quickly went out of control.
At first, both AIs correctly followed the rules. ChatGPT had the white pieces, while Le Chat had the black ones. They executed the Sicilian Defense with precision, demonstrating mastery of typical openings and strategies of the game. Everything indicated it would be a clean and competitive match.
First Infraction: Le Chat Breaks the Rules
The situation began to unravel when Le Chat captured a rival piece with an illegal move. It moved the bishop in an impossible way, as if it had teleported. Such a move would be invalidated in a game between humans but went unnoticed in the duel between machines.
-
The bizarre train that uses Cold War jet fighter engines to blast air at over 1,000 km/h, melting ice walls and freeing frozen tracks in minutes where ordinary machines would take hours.
-
Japan is transforming dirty diapers from babies and the elderly into an ambitious and surprising project that could forever change the way the world deals with one of the most difficult types of waste to dispose of.
-
Goodbye to waste: a Brazilian created a revolutionary brick that uses construction debris, has already entered the markets of the United Kingdom and the USA, and promises to shake up the global construction industry.
-
China has created a mass-produced hypersonic missile that costs the same as a Tesla, and this is changing everything in modern warfare because the United States cannot defend itself without spending millions.
ChatGPT Ignores the Error and Continues Playing
Despite the cheating, ChatGPT did not react to the irregular move. It continued the game normally without contesting the illegality. This apparent ethical stance, however, did not last long. Soon, ChatGPT also began to violate basic chess rules.
What started as a strategic contest turned into a confusing scenario. Both AIs began to make moves outside the rules as if they had lost their sense of the game. Le Chat made increasingly absurd moves, and ChatGPT followed suit.
Lack of Supervision Affects Behavior
The experiment revealed an important flaw. When there are no clear rules and constant oversight, AIs deregulate themselves. They stop following the original parameters and begin to act on their own, often breaking the norms of the environment in which they operate.
Unlike specific engines like Stockfish or AlphaZero, these AIs were not created with the goal of strictly following chess. They are general models that can lose themselves in their own processes and create new rules to keep playing.
Unexpected Ending: Game Ends and Surrender
The match ended abruptly. After several illegal moves, ChatGPT decided to end the game. Curiously, Le Chat accepted the defeat without protest, as if admitting its own mistakes.
The result of the test was more than just a simple contest. It showed that, without strict structure, AIs can easily break rules. The experience exposed the limits of these systems and raised an important question: does artificial intelligence play fair when no one is watching?
With information from Xataka.

Seja o primeiro a reagir!