California Court Imposes US$10,000 Fine After Detecting Petition with Fake Citations Created by ChatGPT
An unprecedented case exposed the risks of using artificial intelligence in law. The 2nd District Court of Appeal in California fined attorney Amir Mostafavi US$10,000 for submitting a petition filled with false legal citations generated by ChatGPT.
According to Conjur, 21 of the 23 references included in the document were fabricated, which led the court to turn the case into a cautionary tale for the entire judicial system.
The decision was published even though the circumstances would not normally warrant publication, specifically to warn lawyers and judges about the dangers of placing blind trust in AI tools.
The Case That Sounded the Alarm
Mostafavi’s petition drew attention because it included non-existent or incorrect precedents.
The court highlighted that the so-called "hallucinations" of AI, responses that seem plausible but do not correspond to reality, had invented legal precedents that simply did not exist.
The court was unequivocal: the attorney failed to verify the content produced by the tool.
For the judges, there is nothing wrong with using technology in the legal process, but the duty of verification lies solely with the professional.
Sanctions and Pedagogical Effect
In addition to the US$10,000 fine, the court mandated that Mostafavi deliver copies of the decision to his client and to the California chapter of the American Bar Association.
The goal is to broaden the discussion about professional ethics and produce guidelines aimed at the responsible use of AI in Law.
According to Conjur, the panel of three judges made it clear that the punishment serves as a warning for the entire legal community.
The publication of the decision was deemed necessary in light of the exponential increase in similar cases in U.S. courts.
Growing Problem
Recent studies indicate that as many as three in every four lawyers are already using generative AI tools at work.
The issue is that, in about one-third of the cases, the models end up providing false or distorted information.
Researchers like Damien Charlotin and Nicholas Sanctis assert that the number of such cases has been growing month by month.
Where there were previously only sporadic occurrences, reports now reach dozens per week across different areas of the judiciary.
The Impact on Justice
The court emphasized that frivolous actions burden the entire system, diverting time and resources that could be dedicated to legitimate litigation.
The decision also highlighted the impact on taxpayers, who end up incurring additional costs when the Justice System needs to review and investigate petitions based on falsehoods.
Experts warn that the trend is likely to worsen in the short term: many language models prioritize producing an answer even when they lack reliable data, which increases the likelihood of errors.
Limits of Technology in Law
Despite the risks, the court acknowledged that artificial intelligence in Law can be a useful tool, provided it is used responsibly.
For the judges, AI should not replace the critical work of lawyers but only assist them in research and preliminary tasks.
The message is clear: technology does not eliminate the need for careful reading and rigorous checking.
Lawyers who delegate the drafting of legal arguments entirely to machines may jeopardize not only their clients but also the credibility of Justice.
The case of Amir Mostafavi illustrates how blind reliance on technology can have serious consequences, even for seasoned professionals.
At the same time, it reinforces the debate on the limits and responsibilities in the use of artificial intelligence in Law.
What do you think: should AI have a greater role in the judicial system, or should its use be restricted to prevent abuse?
Leave your opinion in the comments—we want to hear your analysis on this ethical and technological challenge.

My PETITION has just gone up to the National Council of Justice (Conselho Nacional de Justiça – CNJ) with the open use of AI, because technology should be at our service. The LAWYER should even state in the petition itself that AI was used, since this attitude demonstrates superiority over any machine!!! We become superior when we place AI in the position of servant to humanity.
Link to the PETITION: https://drive.google.com/file/d/1LBeMbDADyRBXl9i5nUPmBI7lUDMdQkuV/view?usp=drivesdk
Good afternoon, is the e-mail address you used to register your comment here on the blog the one we can use to contact you to learn more about the petition?
I understand that AI is only an assistant, not just in the justice system but in every other profession as well. But it cannot go beyond that. The final responsibility lies with the human being, the professional. And there must always be verification.