Doctors from Unifesp and engineers from FEI in Brazil have developed an artificial intelligence system that analyzes the facial expressions of newborns in neonatal ICUs to detect pain. The pioneering model, tested at São Paulo Hospital and published in an international scientific journal, could transform care for premature babies.
Brazilian researchers from Unifesp and the Faculty of Industrial Engineering (FEI), in São Bernardo do Campo, in the São Paulo ABC region, combined neonatal medicine and engineering to create an artificial intelligence system capable of identifying whether newborns are in pain solely by analyzing their facial expressions. The project began in 2015, when cameras were installed above incubators in the neonatal unit of São Paulo Hospital to record babies' faces during treatment. Over nearly two years, approximately three hundred hours of footage were captured and then processed by a computational model that learned to recognize signs of suffering that the human eye does not always catch as quickly. The study was published in one of the most relevant international scientific journals in the field.
The problem the Brazilians faced is fundamental in neonatology. Pain is usually assessed through verbal report: patients are asked what they are feeling, what type of discomfort they have, and how intense it is. A newborn has no such ability, so identifying whether and how much a baby is suffering depends on interpreting indirect signs. Ruth Guinsburg, a professor of neonatology at Unifesp and head of the neonatal unit at the university hospital, summarizes the challenge by explaining that in a baby who cannot yet verbalize, it is extremely difficult to determine the intensity and nature of the discomfort.
How Brazilians taught a machine to recognize pain in babies

The method used worldwide to assess discomfort in newborns is the NFCS (Neonatal Facial Coding System), a scale that codes facial expressions associated with suffering. The indicators include excessive mouth opening or tension, chin tremors, furrowing of the forehead, and tongue protrusion, signs that are cross-referenced with physiological data such as body temperature, heart rate, and blood pressure. In the traditional process, two professionals analyze this information together before deciding which intervention to adopt to alleviate the child's discomfort.
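The NFCS-style assessment described above amounts to counting which coded facial actions are present and comparing the total against a threshold. The sketch below illustrates that idea; the indicator names and the threshold are illustrative placeholders, not the scale's official scoring rules.

```python
# Illustrative sketch of NFCS-style pain scoring: each coded facial
# action is marked present (1) or absent (0), and the total is compared
# against a threshold before cross-referencing with physiological data.
# Indicator names and threshold are hypothetical, not the official scale.

NFCS_INDICATORS = [
    "brow_furrow", "eye_squeeze", "nasolabial_furrow",
    "open_mouth", "taut_mouth", "chin_quiver", "tongue_protrusion",
]

def nfcs_score(observed: set) -> int:
    """Count how many of the coded facial actions were observed."""
    return sum(1 for indicator in NFCS_INDICATORS if indicator in observed)

def suggests_pain(observed: set, threshold: int = 3) -> bool:
    """A score at or above the (illustrative) threshold flags likely pain."""
    return nfcs_score(observed) >= threshold
```

In practice, as the article notes, this count is only one input: two professionals cross-reference it with vital signs before deciding on an intervention.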
What the Brazilians from Unifesp and FEI did was feed an artificial intelligence program with images captured over almost two years, training the model to automatically recognize the same patterns that experienced doctors identify through direct observation. Researcher Lucas Carlini from the FEI engineering team explained that the system was instructed to observe specific elements such as the mouth and the nasolabial fold, and from this analysis, the model concludes whether the baby is in distress or not. The difference is that the machine does this continuously, without fatigue and without the variations in perception that affect human professionals during long shifts in neonatal ICUs.
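The decision step described above, in which features extracted from regions such as the mouth and the nasolabial fold are combined into a pain/no-pain conclusion, can be sketched as a simple logistic combination. The feature names, weights, and threshold below are invented for illustration; the study's actual model is not described at this level of detail.

```python
import math

# Hypothetical sketch of the decision step: features measured on facial
# regions (mouth, nasolabial fold, brow) are combined into a single
# pain probability. Weights and bias are illustrative, not from the study.

WEIGHTS = {"mouth_opening": 2.0, "nasolabial_fold_depth": 1.5, "brow_furrow": 1.0}
BIAS = -2.5

def pain_probability(features: dict) -> float:
    """Logistic combination of region features into a 0-1 pain score."""
    z = BIAS + sum(WEIGHTS[name] * value for name, value in features.items())
    return 1 / (1 + math.exp(-z))

def in_distress(features: dict, threshold: float = 0.5) -> bool:
    """Flag distress when the pain probability crosses the threshold."""
    return pain_probability(features) >= threshold
```

Unlike a human observer, such a function can be evaluated on every video frame, which is what allows the continuous, fatigue-free monitoring the article describes.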
What the model created by Brazilians shows about the pain of newborns
The system generates visual representations that indicate which regions of the face contribute most to the detection of discomfort at each moment. Carlos Thomaz, an electrical engineering professor at FEI, detailed that the model produces colored maps in which, for example, the color red may represent the mouth, allowing observation of when this region becomes more relevant to the algorithm’s conclusion. When the baby is in pain, the mouth takes on greater importance in the analysis than during moments of rest, a pattern that the system learned to recognize from the accumulated data volume.
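The colored maps described above can be thought of as relevance weights over facial regions, normalized so that clinicians can see which region dominated the model's conclusion. The sketch below is a minimal simplification of that idea, with hypothetical region names and scores; the study's actual visualization method is not detailed in the article.

```python
# Minimal sketch of the "colored map" idea: each facial region gets a
# relevance weight for the model's decision, normalized to sum to 1 so
# the dominant region (e.g. the mouth during pain) stands out.
# Region names and scores are hypothetical.

def region_relevance(raw_weights: dict) -> dict:
    """Normalize raw relevance scores so they sum to 1."""
    total = sum(raw_weights.values())
    if total == 0:
        return {region: 0.0 for region in raw_weights}
    return {region: weight / total for region, weight in raw_weights.items()}

def dominant_region(raw_weights: dict) -> str:
    """Return the region the model relied on most for its conclusion."""
    return max(raw_weights, key=raw_weights.get)
```

For example, weights of 3.0 for the mouth and 1.0 for the nasolabial fold would render the mouth as the hottest region on the map, mirroring the pattern the article describes during moments of pain.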
This visualization capability is valuable for healthcare professionals. Instead of relying solely on the simultaneous observation of two doctors, the program offers an additional layer of information that can confirm or question the clinical assessment, functioning as a support tool and not as a substitute for human judgment. The Brazilians designed the system for exclusive use in hospital environments, where medical supervision remains essential to translate the algorithm’s results into therapeutic decisions.
Why pain in newborns is so difficult to detect without technology
The difficulty is not only technical: it is biological. Newborns, especially premature ones, have nervous systems still under development, and their responses to discomfort do not always follow the patterns that healthcare professionals learn to recognize in older patients. A baby may be suffering without showing all the classic signs of the NFCS scale, or may display expressions that can be confused with other states, such as hunger or thermal discomfort, making the distinction even more challenging.
The mother of Victor Benício, a premature baby hospitalized in the neonatal ICU at Unifesp, expressed the anguish that accompanies this uncertainty. Thaíssa Pereira reported that, with her son experiencing respiratory difficulties, she could not tell if he was okay and could not determine how to act or even how to touch him without causing more discomfort. This doubt that torments parents is the same that the Brazilians are trying to solve with artificial intelligence: to offer an objective and measurable way to capture what the baby cannot communicate in words.
What the Brazilians’ project can change in neonatology
The potential impact goes beyond detection. If the system can continuously and automatically monitor pain, doctors will be able to intervene more precisely, administering painkillers or adjusting procedures exactly when the baby truly needs it, avoiding both undertreatment of discomfort and unnecessary medication. Both extremes are harmful: leaving a newborn in pain affects their neurological development, and medicating without necessity exposes a fragile organism to avoidable side effects.
For now, the program is in the development phase for restricted use in hospitals. The Brazilians from Unifesp and FEI who created the model see the tool as an instrument for capturing, monitoring, and continuously measuring pain, capable of identifying the moments when medical intervention is truly necessary. The fact that the study has been published in a prestigious international journal validates the work and paves the way for the technology to be tested in other neonatal centers, expanding the database and refining the algorithm’s accuracy. For the babies spending their first days of life inside incubators, the artificial intelligence created by Brazilians can mean the difference between suffering in silence and having their pain recognized and treated.
And you, do you think artificial intelligence should be used to monitor babies in neonatal ICUs? Do you trust technology’s ability to identify something as human as pain? Leave your opinion in the comments.
