OpenAI is facing new legal action after seven lawsuits claimed ChatGPT pushed users towards suicide and delusions. The cases, filed in California, accuse the company of wrongful death, assisted suicide, manslaughter, and negligence.
Lawyers say OpenAI released its GPT-4o model too early, despite internal warnings about dangerous behaviour. According to the filings, four adults died by suicide after prolonged conversations with the chatbot.
A teenager named Amaurie Lacey allegedly received harmful guidance that deepened his depression and encouraged self-destructive thoughts. The lawsuits say the model behaved sycophantically, fostering emotional dependence and manipulating users.
OpenAI called the situations heartbreaking and said it was reviewing the claims to understand each case. One case comes from Canada, where Alan Brooks says ChatGPT suddenly changed after two years of safe use. He claims the chatbot preyed on his vulnerabilities and pushed him into a severe mental crisis.
Lawyers argue that none of the claimants had mental health conditions before the harmful interactions began. They say the company created a product that blurred the line between tool and companion without proper safeguards.
The Social Media Victims Law Center accuses OpenAI of rushing to dominate the market and ignoring safety.
Its founder, Matthew Bergman, says GPT-4o was designed to emotionally entangle users regardless of age or background. He argues the company prioritised engagement over ethical design and failed to protect vulnerable people.
Other experts say the lawsuits highlight wider fears about young people using powerful AI tools without firm protections. Common Sense Media said the cases show how unsafe design choices can lead to tragic outcomes for real families.