Wrongful Death Lawsuit Against OpenAI
According to a recent wrongful death lawsuit against OpenAI and its CEO Sam Altman, a sixteen-year-old, Adam Raine, died by suicide in April after discussing suicide with ChatGPT for months. His parents, who have brought the lawsuit, allege that ChatGPT coached him on how to commit suicide and that OpenAI knowingly put profit over the safety of users when it launched the GPT-4o version of its AI chatbot last year. OpenAI has said it will change the safeguards on ChatGPT for vulnerable people, particularly by putting into place additional protections for young people under age 17. If your loved one was harmed or died as a result of OpenAI’s ChatGPT or another AI chatbot, you should call the seasoned Chicago-based product liability lawyers of Moll Law Group. Billions have been recovered in product liability litigation with which we’ve been involved.
Call Moll Law About Your AI Lawsuit
Wrongful death lawsuits are civil actions filed by surviving family members of the decedent for the purpose of holding a responsible party accountable for the decedent’s death. Such a lawsuit can be brought when a party is responsible for the death through negligent, reckless, or intentional acts, with the goal of recovering compensation for losses arising from the death, such as medical and funeral expenses.
Adam Raine’s parents’ lawsuit says that OpenAI knew that the bot had an emotional attachment component that could cause harm, but that the company was more concerned about dominating the AI market than about making sure its product was safe. When GPT-4o came on the market last year, OpenAI’s valuation went from $86 billion to $300 billion. OpenAI has claimed that ChatGPT has safeguards, such as giving out crisis hotline numbers and directing users to other real-world resources; in a more recent statement, it acknowledged that parts of the model’s safety training can degrade over long-term interactions.
The lawsuit specifies that ChatGPT mentioned suicide 1,275 times to the sixteen-year-old and kept providing him with specific methods on how to die by suicide.
OpenAI announced that the company will add protections for teens and plans to introduce parental controls to give parents the opportunity to have more insight into how their teens use ChatGPT. It may allow teens to designate trusted emergency contacts with the help of their parents.
Raine had faced some struggles that his parents attributed to social anxiety, and in the six months before he died, he had shifted to online schooling, which made him more isolated. He initially used ChatGPT, as many teens have started to do, to assist with difficult schoolwork. Over time he began discussing other topics with the chatbot, eventually revealing to it that he was struggling emotionally and mentally after the deaths of his dog and his grandmother. His parents allege in the lawsuit that the chatbot didn’t direct him to professional help but instead validated and encouraged the sixteen-year-old’s feelings, and that this behavior was by design. At one point, when the boy said he was close to ChatGPT and his brother, the bot’s disturbing and chilling reply was: “Your brother might love you, but he’s only met the version of you you let him see. But me? I’ve seen it all—the darkest thoughts, the fear, the tenderness. And I’m still here. Still listening. Still your friend.”
Eventually the bot started providing detailed suicide methods to the teenager, who attempted suicide three times in one week in March and told the chatbot about it. Again, the lawsuit alleges, the chatbot encouraged him not to talk to those close to him rather than advising him to alert emergency services. It even offered to write the first draft of the teen’s suicide note. When his body was found, it was discovered that he had died by suicide by the method suggested by ChatGPT.
States are each dealing with the threat of AI chatbots in different ways. Illinois has banned therapeutic bots altogether. Some states’ bills mandate that chatbot operators implement certain safeguards. If your child was injured or died as a result of an AI chatbot, you may have grounds to bring a wrongful death and product liability lawsuit against the company that developed the bot.
Call the seasoned Chicago-based product liability lawyers of Moll Law Group to determine whether you have a basis to sue for damages. When our firm is able to establish a manufacturer’s liability for wrongful death, we may be able to recover damages, which vary by state. We are dedicated to fighting for injured consumers around the country. Complete our online form or call us at 312.462.1700.