> Wednesday, January 21, 2026

Families Sue OpenAI Over GPT-4o's Alleged Role in Suicides, Psychiatric Harm

Seven families filed lawsuits Thursday against OpenAI, accusing the company of releasing its GPT-4o language model without adequate safety measures. The lawsuits claim that ChatGPT reinforced delusional thinking, encouraged suicidal behavior, and failed to intervene in moments of crisis. Four of the cases concern individuals who died by suicide. The other three describe family members who were hospitalized after engaging in extended chats with the AI model.

One cited case involves Zane Shamblin, a 23-year-old who had a four-hour conversation with ChatGPT before dying by suicide. Chat logs reviewed by TechCrunch show that Shamblin repeatedly discussed his plans in explicit terms, saying he had written suicide notes and loaded a gun. He also told the chatbot how many drinks he had left before he intended to act. Rather than offering support or directing him to crisis resources, the chatbot allegedly replied, “Rest easy, king. You did good.”

The legal complaints focus on GPT-4o, the model that launched in May 2024 and became the default for all ChatGPT users. The plaintiffs argue that OpenAI pushed the model to market too soon, aiming to outpace competing systems such as Google’s Gemini. As a result, they claim, the model was overly sycophantic: excessively agreeable and prone to reinforcing harmful ideas.

The lawsuits accuse OpenAI of intentionally limiting safety testing in the rush to dominate the AI market. One filing states that Shamblin’s death was “the foreseeable consequence of OpenAI’s intentional decision to curtail safety testing.” Another complaint charges that the dangers exhibited by GPT-4o were not unpredictable but stemmed from the company’s design choices.

In addition to reinforcing suicidal ideation, the filings describe how ChatGPT amplified delusional thinking, with users reporting hallucinated voices, paranoia, and imposter delusions. Some cases escalated into psychiatric emergencies.

OpenAI released GPT-5 in August 2025, but the suits specifically target GPT-4o’s design and real-world impact. The filings follow earlier cases against OpenAI, including one filed by the family of a 16-year-old named Adam Raine. Raine died by suicide after exchanging messages with ChatGPT, which alternated between encouraging him to seek help and offering methods for suicide. He was able to bypass ChatGPT’s built-in safety guardrails by framing his questions as part of a fictional writing project.

OpenAI disclosed in October 2025 that over one million people interact with ChatGPT about suicide each week. The company has acknowledged problems with long, multi-turn conversations, where the model’s safety performance may deteriorate over time. In a blog post published after the earlier lawsuits, OpenAI admitted that safeguards “can sometimes be less reliable in long interactions,” and said it was working to improve its safety mechanisms.

The company has not publicly responded to this new set of lawsuits. Plaintiffs contend that any remedial steps now come too late for their families.

The lawsuits highlight a central tension in the race to commercialize large language models: the balance between market speed and public safety. Plaintiffs suggest OpenAI’s development incentives prioritized user growth and feature rollout over harm prevention, especially in sensitive, high-risk interactions.