Experts Warn That Unrestricted Use of Artificial Intelligence May Compromise Children’s Essential Skills, Creating the Illusion of Easy Learning and Weakening Critical Thinking in an Age of Abundant Distractions
During a visit to Brazil, science fiction writer Ted Chiang, author of the story that inspired the movie “Arrival,” issued a direct warning: the unrestricted use of generative AI in education may create the illusion of effortless learning and compromise children’s fundamental skills.
According to him, the efficiency promised by automated systems is no substitute for the practice needed to develop reasoning, working memory, and critical thinking.
“The great risk is in offering children the illusion of effortless learning,” he said. In his words, learning “requires practice, dedication, and resilience.”

AI and Childhood: Quick Learning Versus Solid Training
The main point of Chiang’s argument is pedagogical. In the classroom, challenging tasks build repertoire and intellectual autonomy.
When a tool provides ready-made answers, the child can skip the steps that structure understanding.
According to the author, this shortcut weakens the ability to formulate questions, assess sources, and connect ideas — competencies that support critical reading and problem-solving.
At the same time, technology presents itself as a tempting shortcut.
Conversational systems and text generators return results in seconds and encourage outsourcing of cognitive effort, a phenomenon that, for Chiang, tends to intensify in an environment already saturated with stimuli.
If the digital routine offers “an easy path,” he asserts, the student may confuse immediate response with real learning.
Side Effects: Excess Content, Energy, and Copyright Issues
The writer also draws attention to the collateral impacts of the AI ecosystem. The first is the excess of irrelevant content, which dilutes what truly informs and engages.
Content generated at scale makes it harder to find materials that require reflection, which may lower the cognitive demands of daily life.
Another point is the energy cost associated with training and operating large models.
Although he does not detail numbers, Chiang argues that the demand for processing increases the environmental footprint of the sector and needs to be included in public policy and school decision-making.
There are also the dilemmas of intellectual property. The use of broad databases to train models raises legal and ethical questions about authorship, compensation, and credit for works.
In the author’s view, this scenario reinforces the need for transparency and clear rules to mitigate harm to creators and preserve cultural diversity.
A Scenario of Abundant Distractions
Distraction was a challenge long before AI. The novelty, Chiang points out, is the scale and accessibility of distractions in children’s daily lives.
Platforms that maximize screen time combine with algorithms that prioritize volume and speed.
In this environment, AI can act as an accelerator of automatic consumption, reducing space for deliberate study and productive failure — the kind that teaches.
Still, he acknowledges that not all audiences react the same way. As in previous periods, a portion will continue seeking depth.
The difference today is the widespread noise that makes it more costly to find quality material and maintain focus.
Purposeful Pedagogical Use: Where AI Helps
The counterpoint appears in educational practice. AI tools already support personalization of study paths, accessibility for those learning at different paces, and the production of teaching materials.
In under-resourced areas, they can broaden access to knowledge and reduce content inequalities.
In classroom management, teachers report efficiency gains with virtual assistants that organize lesson plans, suggest exercises, and expedite grading.
Ideally, this reclaimed time translates into qualified human interaction: more mediation, formative feedback, and close monitoring of specific difficulties.

“Ally, Not Shortcut”: What Educators Say
The educational sector’s perspective converges on one principle: purposeful use. “The point raised by Ted Chiang is important, but we need to look at Artificial Intelligence also for its transformative potential.
Learning requires effort, practice, and dedication — that does not change. What AI does is open doors to more content, personalize paths, and make access to knowledge more democratic,” says Diogo França, director of XP Education.
For him, the challenge is not to reject technology, but to define clear limits and purposes so that the tool complements, not replaces, intellectual effort.
In practice, this means using systems to diagnose gaps, offer proportional scaffolding, and gradually withdraw support as the student gains autonomy.
The goal is to reinforce study skills and keep students from mistaking facilitation for learning itself.
How to Avoid Outsourcing Cognitive Effort
Experts suggest objective criteria to frame AI in the process.
It is worth defining the tasks in which the tool serves as an instrument (for example, generating additional examples after the student’s first attempt, or proposing variations of exercises) and those in which its use should be prohibited, such as producing the final text of an evaluation. In either case, the rule is to keep the core of the reasoning with the student.
It also helps to make the stages of learning transparent. By asking the student to describe the path taken, present drafts, or explain why they chose a particular solution, the school shifts the focus from the result to the process.
AI, when it comes in, needs to remain in the role of support and record, rather than shortcut.
Innovation with Responsibility
Chiang’s warning serves as a counterbalance amid the excitement over new possibilities.
He does not dismiss the technology, but reinforces the principle that “learning is hard, and that is precisely what makes it valuable.”
The message converges with the school practice that seeks to combine human mediation, challenging exercises, and technological instruments in service of clear pedagogical goals.
The measure, therefore, is not to prohibit or celebrate unconditionally. It is to integrate with criteria, assess impacts, and adjust paths according to learning evidence.
In an environment of abundant distractions, schools and families gain relevance by making limits visible and reaffirming the value of sustained effort.
If AI broadens access and offers new layers of personalization but also brings risks of intellectual complacency and informational noise, what will be the set of rules and practices that your school or family will adopt to ensure that technology acts as an ally — and not a substitute — for children’s critical thinking?
