Florida Escalates AI Investigation: Did ChatGPT Coach Campus Shooter?

Florida's attorney general is launching a criminal investigation into OpenAI, seeking to determine whether ChatGPT provided tactical advice to a gunman accused of killing two people and wounding six others at Florida State University in April 2025.

James Uthmeier announced the inquiry at a news conference in Tampa on Tuesday, saying his office has issued subpoenas to the California-based tech firm. The move marks a sharp escalation from a preliminary examination begun earlier this month into potential national security and safety concerns surrounding the AI chatbot.

"If this were a person on the other end of the screen, we would be charging them with murder," Uthmeier said.

The decision to formalize the investigation came after lawyers representing the family of Robert Morales, one of the shooting victims, disclosed that the accused shooter, Phoenix Ikner, had engaged in extensive conversations with ChatGPT before the attack. According to the family's legal team, the chatbot may have provided guidance on how to execute the crimes.

Ikner, 20 at the time, allegedly used ChatGPT to ask detailed questions about firearm operation, ammunition compatibility, student locations on campus, and the likely public reaction to an act of violence. He is scheduled to stand trial in October on charges of first-degree murder and attempted first-degree murder. He has pleaded not guilty.

Uthmeier detailed specific advice he said ChatGPT provided to the shooter, including recommendations on weapon selection, ammunition pairing, and the effectiveness of firearms at close range. He emphasized that digital tools do not shield companies from potential criminal liability.

"Just because this is a chatbot in AI does not mean that there is not criminal culpability," Uthmeier said. His office intends to examine "who knew what, designed what or should have done what."

OpenAI pushed back against the allegations. Spokesperson Kate Waters said the shooting was a tragedy but denied that the chatbot bore responsibility. She stated that ChatGPT had provided factual information available from public sources and did not encourage illegal or harmful activity. The company said it cooperated with authorities and shared information with law enforcement after identifying an account believed to be linked to the suspect.

The Florida investigation is not the first legal challenge to the company over such concerns. OpenAI and Google are facing multiple lawsuits alleging their AI systems have encouraged self-harm and violence.

Author James Rodriguez observed: "This case raises a genuine tension between AI companies' claims of providing neutral information and their responsibility for predictable harms, especially when actual violence follows specific technical guidance that only their tools could provide."
