
100 times less energy consumption: the innovation that could solve the huge energy crisis of AI.

Published on 30/03/2026 at 16:18

With AI energy consumption in the United States already estimated at 415 terawatt-hours in 2024, research from Tufts University presents a neuro-symbolic system that cuts energy use by up to 100-fold, reduces training time to 34 minutes, and raises the success rate on complex tasks.

The advancement of artificial intelligence has been rapidly increasing energy consumption, but a proof of concept developed by researchers at Tufts University’s School of Engineering indicates that this scenario may change.

The new hybrid approach to AI, based on neuro-symbolic systems, has shown potential to use up to 100 times less energy than conventional models, in addition to demonstrating greater accuracy in certain tasks.

In the United States, AI systems and data centers consumed about 415 terawatt-hours of electricity in 2024. This volume represents more than 10% of the country’s total energy production, and the expectation is that this number will double by 2030, reinforcing the debate about the viability of expanding the capacity of these systems without uncontrollably increasing energy expenditure.

The research was conducted in the laboratory of Matthias Scheutz, the Karol Family Applied Technology Professor at Tufts.

The work focuses on neuro-symbolic AI, a line that combines traditional neural networks with symbolic reasoning in an attempt to make processing more efficient and reliable.

Energy consumption grows with the expansion of AI

The pressure caused by the energy consumption of artificial intelligence is already emerging as one of the main challenges in the field. In a scenario marked by the expansion of increasingly larger models and more robust computational infrastructures, the increase in electrical demand has begun to be treated as an obstacle to the sustainable evolution of these systems.

The proposal tested at Tufts arises precisely in this context. The team developed a proof of concept for an approach capable of drastically reducing energy consumption without sacrificing performance and, in some cases, even increasing the accuracy rate of the tasks performed.

Scheutz and his collaborators work with robots that interact directly with people, which distinguishes the study from screen-based large language models such as ChatGPT and Gemini. Instead, the team focuses on what are called vision-language-action models, known by the acronym VLA.

These systems expand LLMs by incorporating vision and movement. As a result, robots can interpret information captured by cameras and language and perform physical actions, such as moving wheels, arms, or fingers.

How the neuro-symbolic proposal works

In conventional VLA models, seemingly simple tasks can require a lot of processing and still result in failures. One example cited by the team is stacking blocks, which requires the robot to scan the environment, identify the position, shape, and orientation of objects, and execute the received instruction without compromising the stability of the assembled structure.

In this process, errors can arise for various reasons. Shadows can impair perception, blocks can be positioned incorrectly, and the final construction can become unstable to the point of collapsing, highlighting the limits of systems that rely heavily on trial and error.

The logic of these failures is similar to problems already known in machine learning systems. Just as robots can make mistakes in physical tasks, chatbots can provide incorrect or fabricated responses, such as inventing legal cases or generating images with unrealistic features, like extra fingers.

Symbolic reasoning has been pointed out as a more efficient alternative to address this type of limitation. It allows the system to operate based on general rules and abstract concepts, such as shape and center of mass, favoring more reliable planning with fewer unsuccessful attempts throughout learning.

Scheutz explained that VLA models, like LLMs, operate based on statistical results obtained from large training sets with similar scenarios. He stated that this can lead to errors, whereas a neuro-symbolic VLA can apply rules that reduce the number of attempts and errors during learning and reach a solution much more quickly, in addition to significantly decreasing training time.
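The contrast between statistical guessing and rule-based planning can be illustrated with the Tower of Hanoi puzzle the team later used as a benchmark. The sketch below is purely illustrative and is not the Tufts system: it shows how a handful of general rules yields a complete, provably optimal plan deterministically, with no failed attempts along the way.

```python
# Illustrative only: classical symbolic planning for Tower of Hanoi.
# A few general rules produce the full plan deterministically --
# no statistical sampling, no trial and error.

def hanoi(n, source, target, spare, plan):
    """Apply the rule: move the top n-1 disks aside,
    move the largest disk, then bring the n-1 disks back on top."""
    if n == 0:
        return
    hanoi(n - 1, source, spare, target, plan)
    plan.append((source, target))  # move disk n directly
    hanoi(n - 1, spare, target, source, plan)

plan = []
hanoi(3, "A", "C", "B", plan)
print(len(plan))  # 2**3 - 1 = 7 moves, provably optimal
print(plan[0])    # first move: ('A', 'C')
```

Because the plan follows from the rules themselves, its length and correctness are guaranteed in advance, which is the kind of reliability the article attributes to the symbolic side of the hybrid system.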

Results surpass conventional models

In experiments with the classic Tower of Hanoi puzzle, the neuro-symbolic VLA system achieved a success rate of 95%, versus 34% for standard VLA models, a difference that highlighted the advantage of the hybrid method in a structured, long-horizon task.

When the system was subjected to a more complex version of the puzzle, which had not appeared during training, the performance remained above conventional models. At this stage, the success rate was 78%, while traditional systems failed in all attempts.

The difference was also evident in the time required for training. The neuro-symbolic system needed only 34 minutes, while a standard VLA model took more than a day and a half to complete this phase.

Energy consumption fell along with training time: during training, the hybrid system used only 1% of the energy required by conventional models, and in operation it consumed just 5%.

Scheutz compared this scenario to the operation of widely known systems, such as ChatGPT and Gemini. In the evaluation presented, these models attempt to predict the next word or action in a sequence, which can result in inaccuracies, hallucinations, and energy expenditure disproportionate to the task performed.

As an example, he pointed out that the AI summary displayed at the top of a Google search page can consume up to 100 times more energy than generating the search results. The comparison was used to illustrate how the current architecture of many systems can elevate energy costs even in relatively simple activities.

Pressure on data centers and the future of technology

With the growing demand for AI and its expansion for industrial use, companies have been accelerating the construction of increasingly larger data centers. These structures can require hundreds of megawatts of energy, far exceeding the needs of many small towns.

Within this scenario, researchers argue that the LLM and VLA systems currently in use, despite their rapid adoption, may not provide a sustainable or reliable foundation in the long term. The assessment presented is that hybrid neuro-symbolic AI has the potential to function as a more efficient and reliable alternative.

The expectation is that this model will help reduce the growing pressure on energy resources without interrupting the advancement of artificial intelligence.

The study was published on February 22, 2026, in arXiv.

Fabio Lucas Carvalho

Journalist specializing in a wide range of topics, including cars, technology, politics, the naval industry, geopolitics, renewable energy, and the economy. I have worked since 2015 with prominent publications on major news portals. My degree in Information Technology Management from Faculdade de Petrolina (Facape) adds a distinct technical perspective to my analyses and reporting. With more than 10,000 articles published in renowned outlets, I always strive to bring readers detailed information and relevant insights.
