
ChatGPT becomes a central piece in a lawsuit against OpenAI after an attack at a US university, and the case raises an uncomfortable question about how far artificial intelligence can be held responsible.

Written by Viviane Alves
Published on 12/05/2026 at 13:58

Accusations involving ChatGPT, a criminal investigation in Florida, and a lawsuit against OpenAI place artificial intelligence at the center of one of the United States' most delicate debates.

OpenAI, the company behind ChatGPT, is facing a lawsuit in the United States after the chatbot was cited in investigations into the April 2025 attack at Florida State University in Tallahassee.

According to American prosecutors, the accused, Phoenix Ikner, allegedly used ChatGPT before the crime to seek information about the location, timing, and strategies that could maximize the number of victims. The accusation further alleges that the system provided answers about the type of weapon, ammunition, and short-range effectiveness.

The attack left two people dead and six others injured on the university campus. Among the fatalities was Tiru Chabba, husband of Vandana Joshi, the plaintiff in the lawsuit filed against OpenAI.

Investigation into ChatGPT use increases pressure on artificial intelligence companies

According to information released by the Associated Press this Monday, May 11, Vandana Joshi stated that OpenAI was already aware of the risks involving AI-generated responses.

In a statement released by her lawyers, Joshi declared that "it was only a matter of time until it happened again" and accused the company of putting "profits above safety."

On the other hand, Drew Pusateri, a spokesperson for OpenAI, denied any company responsibility for the incident at Florida State University.

According to the company representative, ChatGPT provided only factual answers based on content publicly available on the internet. Pusateri added that the system neither encouraged nor promoted illegal or violent activities.

The lawsuit was officially filed in federal court on Sunday, May 10.

Accused faces homicide charges and prosecutors seek death penalty

Phoenix Ikner currently faces two counts of first-degree murder, as well as several counts of attempted murder related to the April 2025 attack.

Prosecutors responsible for the case have said they intend to seek the death penalty. Ikner has pleaded not guilty.

In parallel, the Florida Attorney General announced, also in April, the opening of a rare criminal investigation involving ChatGPT. The inquiry aims to determine whether the application actually provided guidance to the accused before the attack.

Cases against technology companies increase in the United States

In addition to the action against OpenAI, other lawsuits involving technology companies have also gained momentum recently in the United States.

In March, for example, a jury in Los Angeles found Meta and YouTube liable for harm caused to child users of their platforms.

In New Mexico, another jury concluded that Meta had knowingly harmed children’s mental health. According to the decision, the company also concealed information related to the sexual exploitation of minors on its digital platforms.

Against this backdrop, the case involving OpenAI and ChatGPT further widens the global debate over the limits, risks, and responsibilities of artificial intelligence tools.

Meanwhile, legal experts are closely following the progress of the criminal investigation and civil lawsuit in the United States.

After all, to what extent should artificial intelligence companies be held legally accountable when their platforms appear to be associated with violent crimes?

Viviane Alves

Writer specializing in the production of strategic content covering macro and microeconomics, geopolitics, the energy market, the automotive sector, and global trade.
