The wife of a man killed in a mass shooting at Florida State University last year has filed a lawsuit against OpenAI, the maker of ChatGPT, alleging that the AI chatbot played a role in advising the attacker.
The lawsuit, filed in a federal court on Sunday (10 May), claims that ChatGPT provided detailed and harmful guidance to the shooter, including suggestions on timing and location to maximize casualties on campus, as well as information about weapons and ammunition.
According to state authorities cited in the case, the chatbot also suggested that attacks involving children could attract greater media attention.
The victim’s wife, Bandana Joshi, said in a statement on Monday (11 May):
“OpenAI knew this would happen. It has happened before, and it was only a matter of time before it happened again.”
Her husband, Tiru Chabba, was one of two people killed in the attack; six others were injured.
Allegations in the Lawsuit
The lawsuit argues that OpenAI should have implemented stronger safety measures to prevent the chatbot from being used to plan violent acts. It states that systems should be capable of identifying and alerting authorities to “specific plans of imminent harm to the public.”
OpenAI's Response
OpenAI has described the incident as a "horrific crime" and denied any wrongdoing or responsibility for the attack.
The company has not yet commented in detail on the ongoing legal proceedings.
The case has renewed debate over the safety, regulation, and accountability of artificial intelligence tools.
Source: AP
