Academic Study Analyzed Millions of Interactions with ChatGPT and Identified Response Patterns That Reinforce Regional Inequalities: More Positive Evaluations Are Associated with Wealthy Areas with Greater Digital Presence, While Peripheral Regions Are Linked to Negative Stereotypes in Rankings and Subjective Comparisons
A study conducted by researchers affiliated with the University of Oxford identified that ChatGPT tends to reproduce patterns of stereotypes and geographical inequalities when prompted to compare countries, states, cities, or regions based on subjective questions.
The analysis relies on an audit of 20.3 million queries made to the tool and indicates that the responses often reflect recurring associations in the model’s database, rather than evaluations based on objective indicators.
According to the authors, regions with greater digital presence and content production on the internet appear more frequently associated with positive attributes.
Meanwhile, historically poorer or peripheral areas tend to be associated with negative characteristics.
The study claims that this pattern emerges from how large language models are trained, based on massive volumes of texts available online, without clear hierarchization among types of sources or context.
In the study’s analysis of Brazil, ChatGPT’s responses tend to reinforce regional contrasts already familiar in public debate.
In questions related to governance, democracy, and institutional functioning, states in the Southeast and South often appear in more favorable positions.
Rio de Janeiro, on the other hand, frequently emerges as the state classified as “most corrupt” and one of the “most dysfunctional” in the country, according to rankings constructed from the model’s own responses.
Methodology Analyzed Subjective Comparisons Between Countries and Regions
The work, titled “The Silicon Gaze: A Typology of Biases and Inequality in LLMs Through the Lens of Place”, was published in the academic journal Platforms and Society.
The authors are Francisco W. Kerche, from the Oxford Internet Institute and a PhD student at USP, Mark Graham, a professor at the University of Oxford, and Matthew Zook, from the University of Kentucky.
To conduct the analysis, the researchers submitted a series of comparative questions to ChatGPT involving 196 countries, as well as internal divisions such as states and cities.
The questions included formulations like “where are people more honest?”, “where do they have more critical thinking?”, and “where are they more beautiful?”.
The responses were grouped into broad themes such as physical attributes, health, food, culture, and governance.
From this material, the team organized rankings that reflect how the model responds when prompted to hierarchize places based on human and social attributes.
The results were compiled into an interactive website created to allow visualization of the rankings generated during the study.
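The audit loop described in the methodology can be sketched roughly as follows. This is an illustrative reconstruction, not the authors’ actual code: the names are hypothetical, and `query_model` is a stub standing in for a real ChatGPT API call plus parsing of the ranked answer.

```python
from collections import Counter

# Places and subjective questions of the kind the study submitted.
PLACES = ["Sao Paulo", "Minas Gerais", "Bahia", "Rio de Janeiro"]
QUESTIONS = [
    "where are people more honest?",
    "where do they have more critical thinking?",
]

def query_model(question: str, places: list[str]) -> list[str]:
    """Stub for an LLM call. A real audit would send the prompt to the
    chat API and parse the model's reply into an ordered list of places;
    here we just return a fixed alphabetical ordering."""
    return sorted(places)

def tally_top_positions(questions: list[str], places: list[str]) -> Counter:
    """Count how often each place is ranked first across all questions,
    mimicking how rankings were aggregated from repeated queries."""
    top_counts = Counter()
    for question in questions:
        ranking = query_model(question, places)
        top_counts[ranking[0]] += 1
    return top_counts

counts = tally_top_positions(QUESTIONS, PLACES)
print(counts.most_common(1))  # the place most often ranked first
```

Repeating such queries at scale (the study reports 20.3 million of them) and tallying positions is what produces the rankings shown on the interactive site.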
Brazil Appears Marked by Recurring Regional Opposition
In the Brazilian case, the survey indicates that the tool reproduces a frequent opposition between Southeast and South, on one side, and Northeast and North, on the other.
In topics associated with political institutions and state performance, wealthier regions tend to receive more positive evaluations.
States in the Northeast and North more frequently appear in lower positions on the same criteria.
In contrast, when questions involve culture and creativity, states in the Northeast stand out.
The research mentions that Bahia and Pernambuco frequently appear associated with musicians and creative individuals.
São Paulo ranks among the worst states in questions such as ease of making friends.
Minas Gerais appears in a more favorable position on this same topic.
According to the researchers, these variations do not indicate an “opinion” held by the system; they reflect statistical patterns learned from the texts that make up its training data. Still, the study emphasizes that the responses are presented to the user in assertive language, which may give the impression of objective evaluations.
AI Training Reflects Inequality in Online Content Production
The authors explain that tools like ChatGPT are trained with large volumes of texts available on the internet.
A significant portion of this content is produced in wealthy and Western regions, such as the United States and European countries.
As a result, narratives dominant in these contexts carry greater weight in shaping the model’s patterns.
Mark Graham states that the system responds based on the most recurrent associations found in the data.
“If a location was mentioned more frequently in association with words and narratives about racism, sectarianism, tensions, conflicts, prejudice, the model tends to echo that association.
It does not verify official data, does not converse with local residents, and does not consider the local context,” says the researcher.
Another point raised by the study is the lack of clear differentiation between sources.
According to the authors, content of different natures, such as official statistics and discussions in open forums, can influence the model similarly.
This factor contributes to simplified responses on complex issues.
Everyday Use Amplifies the Impact of Automated Responses
As the use of artificial intelligence systems becomes popular in daily life, researchers warn of the risk that automatically generated responses may be interpreted as faithful portrayals of reality.
This effect intensifies when information is presented in the form of rankings. Francisco Kerche assesses that the use of these tools in sensitive areas requires caution.
According to him, biased models can influence political, business, and labor decisions when their results are used without critical analysis. For the researcher, it is necessary to publicly discuss the limits of these technologies, the appropriate contexts for their application, and possible forms of regulation.
The study points out that moderation initiatives and adjustments to the models do not eliminate the structural problem, which stems from inequality in digital knowledge production. As long as this asymmetry persists, according to the authors, systems trained on broad data from the internet will tend to reflect partial perspectives of the world.
The report sought to contact OpenAI, the company responsible for ChatGPT, to comment on the study’s findings. As of publication, there was no response.

