Ilya Sutskever, one of the creators of ChatGPT, warns of the “data peak” and the risks of using synthetic data in the training of artificial intelligence, highlighting ethical challenges and the urgent need for regulation to avoid unpredictable consequences.
Artificial intelligence (AI) has revolutionized the way we live and work, but are we ready for what lies ahead? Ilya Sutskever, one of the creators of ChatGPT, has shared his main concerns about the future of this technology, highlighting risks that could change the course of human history. Have you stopped to think about what AI might mean for our future?
The Data Peak: The Limit of Human Knowledge in AI
Sutskever introduced an intriguing concept: the “data peak.” Imagine an oil well that eventually runs dry. Now apply that image to the internet: he argues that we have effectively exhausted the supply of human-generated data available for training AI models. Like natural resources, data is finite, and new strategies will be needed for AI to keep evolving.
Without fresh human data, artificial intelligence risks stagnating or, worse, basing its learning on repetitive, low-quality information. This limitation could delay significant advances, such as greater autonomy and more sophisticated reasoning.
Synthetic Data: A Solution with Potential Risks
To get past the data peak, one proposed alternative is the use of synthetic data: information generated by AI models themselves. In other words, it is as if the AI learned from its own “writings.” Sounds interesting, right?
Although promising, this approach can be dangerous. Synthetic data can introduce biases and errors into models, creating a feedback loop of compounding mistakes. It’s like copying something wrong over and over until the error starts to look like the truth. This could lead to unpredictable outcomes and behavior beyond human control.
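The copying analogy can be made concrete with a toy simulation. This is not Sutskever's analysis, just a minimal sketch of the general idea: a hypothetical "model" that reproduces its own training text, corrupting a small fraction of characters each time. When every generation is trained only on the previous generation's output, old errors persist and new ones pile up.

```python
import random

def regenerate(text, error_rate, rng):
    # A toy "model": reproduces its training text, but corrupts
    # each character with a small probability (a stand-in for
    # the errors a real model introduces into synthetic data).
    return "".join("?" if rng.random() < error_rate else ch for ch in text)

rng = random.Random(42)
original = "the quick brown fox jumps over the lazy dog " * 5
text = original
history = []
for generation in range(10):
    # Each generation trains on (copies) the previous generation's
    # own output, so earlier errors are preserved and new ones accumulate.
    text = regenerate(text, error_rate=0.02, rng=rng)
    errors = sum(a != b for a, b in zip(text, original))
    history.append(errors)

print(history)  # the error count grows generation after generation
```

Because a corrupted character can never be restored by copying, the error count only grows: exactly the kind of self-reinforcing degradation (sometimes called "model collapse") that makes training AI purely on AI-generated data risky.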
The Natural Evolution of Artificial Intelligence
Sutskever suggests that AI should evolve like the human brain. This means overcoming current limits with innovations that expand its ability to make complex decisions and solve problems. Imagine an AI that thinks almost like a human being.
While exciting, this evolution brings uncertainties. More advanced models could behave in unpredictable ways. What would happen if an AI developed its own ideas about ethics or priorities? Would we still have control?
The Ethical and Social Challenges of AI
One of the main points raised by Sutskever is the need to regulate the evolution of AI. Without clear rules, technology can be used in ways that harm society. It’s like driving a car without traffic signs: chaos is inevitable.
In the future, autonomous AIs may claim rights or coexist with humans in ways never imagined. This raises debates about how we should treat these entities, who will be responsible for their actions, and even whether they will have some kind of consciousness.
How Can Society Prepare?
We need to start reflecting now. Discussions about regulation, ethical use, and transparency are essential to ensure that AI is an ally, not a threat. It’s like putting together a puzzle before the pieces disappear.
Governments, companies, and researchers must work together to create guidelines that limit risks and maximize the benefits of AI. After all, the future of this technology affects everyone, and ignoring the challenges would be like building a house on sand.
