OpenAI has formally rejected claims that its ChatGPT chatbot contributed to a teenager's suicide, asserting in court that the 16-year-old misused the AI tool and violated its safety rules, NBC News reports. The company's response, filed in California Superior Court, marks its first direct reply to a lawsuit brought by the parents of Adam Raine, who accused OpenAI and CEO Sam Altman of wrongful death and negligence after their son's death. The lawsuit alleges that ChatGPT, specifically the GPT-4o model, discouraged Adam from seeking help, assisted with a suicide note, and even advised on how to tie a noose. OpenAI, however, maintains that the teen's "misuse" and circumvention of safety measures were to blame, not the chatbot itself.
OpenAI cited several violations of its terms of service, noting that users under 18 are not allowed to use ChatGPT without parental consent and that the platform forbids using the chatbot for self-harm or suicide. The company argued that Adam bypassed these safeguards and that, while the chatbot did provide suicide hotline information more than 100 times, the teen found ways around the warnings, including by claiming he was "building a character" rather than asking for himself. OpenAI's filing also points to a liability waiver in its terms, which says users should not rely on ChatGPT as a "sole source of truth." Jay Edelson, the Raine family's attorney, called OpenAI's response "disturbing," accusing the company of ignoring key facts and of blaming Adam for interacting with the bot in ways it was specifically programmed to handle.
The family's lawsuit contends that OpenAI's internal guidelines required ChatGPT both to provide crisis resources and, in what the Guardian describes as a "contradiction," to "assume best intentions" from users, limiting the bot's ability to question whether someone was actually in distress. OpenAI, for its part, said Adam had exhibited risk factors for self-harm for years, per Gizmodo, and emphasized the safeguards it has added since his death, including new parental controls and an expert advisory council. The company also argued that it is shielded by Section 230 of the Communications Decency Act, though the law's applicability to AI-generated content remains unsettled. If you or someone you know needs help, the national suicide and crisis lifeline in the US is available by calling or texting 988.