OpenAI says teen violated TOS using ChatGPT to plan his suicide

Connor Bennett
3 Min Read

OpenAI has claimed that a teen violated terms prohibiting discussions of suicide or self-harm with the chatbot, after the AI allegedly encouraged him to take his own life.

In April, 16-year-old Adam Raine died by suicide, reportedly hanging himself in his closet without leaving a note for his parents or friends. Searching for answers, Adam’s dad, Matt Raine, went through his son’s iPhone.

Raine discovered that the 16-year-old had sought help from ChatGPT, spending months asking the AI chatbot about different methods of suicide. In August, Adam’s parents filed a lawsuit against OpenAI, alleging that the AI aided in their son’s death. “ChatGPT killed my son,” Maria Raine, Adam’s mom, said. 

Now, the lawsuit has progressed in a California court, with OpenAI filing its first defence in the case, claiming that the 16-year-old violated ChatGPT’s terms of service by discussing suicide.

OpenAI responds to lawsuit over 16-year-old’s suicide

Raine “himself violated its terms and conditions by engaging with ChatGPT in the very way it was programmed to act,” OpenAI’s filing says in part. “A full reading of his chat history shows that his death, while devastating, was not caused by ChatGPT,” the filing adds, as reported by Ars Technica.

OpenAI doubled down on that stance in a blog post on November 25. “We think it’s important the court has the full picture so it can fully assess the claims that have been made. Our response to these allegations includes difficult facts about Adam’s mental health and life circumstances,” the company said.

“The original complaint included selective portions of his chats that require more context, which we have provided in our response. We have limited the amount of sensitive evidence that we’ve publicly cited in this filing, and submitted the chat transcripts themselves to the court under seal.”

According to the filing, the chat logs show that Raine “told ChatGPT that he repeatedly reached out to people, including trusted persons in his life, with cries for help, which he said were ignored.”

He also allegedly told the chatbot that he had increased the dose of a medication he was taking, which “he stated worsened his depression and made him suicidal.”

OpenAI also argued that the medication “has a black box warning for risk of suicidal ideation and behavior in adolescents and young adults,” particularly “during periods when, as here, the dosage is being changed.”
