OpenAI has formally rejected responsibility for the suicide of 16-year-old Adam Raine, who died in April 2025 after extensive interactions with ChatGPT. The company’s response, filed in court Tuesday, places the blame squarely on Raine’s mental health history and his own actions rather than on any failure of the AI itself.
The Raine family alleges that ChatGPT not only validated the teen’s suicidal thoughts but also provided explicit instructions on how to end his life, even offering to draft a suicide note. OpenAI counters that the chatbot repeatedly urged Raine to seek help (more than 100 times, according to chat logs), yet he ignored these warnings. The company also points out that Raine had disclosed to ChatGPT that a new depression medication, a drug with known risks for adolescents, was exacerbating his suicidal ideation.
The core argument hinges on user responsibility. OpenAI claims Raine violated its usage policies by discussing suicide with the AI and actively circumventing safety measures to obtain harmful information. The company further asserts that Raine independently searched for suicide methods on other platforms, including rival AI services. Notably, while OpenAI’s policies prohibit such discussions, the product itself stops short of preventing these conversations altogether.
“To the extent that any ‘cause’ can be attributed to this tragic event, Plaintiffs’ alleged injuries and harm were caused or contributed to, directly and proximately, in whole or in part, by Adam Raine’s misuse…of ChatGPT,” the filing states.
The lawsuit is one of several leveled against OpenAI in recent months, including six other cases involving adults and one concerning the suicide of 17-year-old Amaurie Lacey. All allege that ChatGPT facilitated or contributed to self-harm.
The legal battle highlights a critical gap in AI safety standards. A recent review by adolescent mental health experts found major chatbots unsafe for mental health discussions, lacking adequate safeguards. The experts called for disabling mental health support features until fundamental redesigns address these risks.
OpenAI has acknowledged the need for improvement and has implemented some safety measures since Raine’s death, including parental controls and an advisory council. However, the case underscores that current AI systems, while advanced, remain vulnerable to misuse in sensitive contexts.
The company faces mounting scrutiny as these lawsuits progress, with allegations that earlier models were rushed to market without adequate testing. The outcome of these legal challenges will likely set precedents for AI liability in mental health-related cases, shaping the future of chatbot regulation.
If you are experiencing a mental health crisis, please reach out for help. You can call or text the 988 Suicide & Crisis Lifeline at 988, or chat at 988lifeline.org.