OpenAI relaxed ChatGPT guardrails just before teen killed himself, family alleges
The family of a teenager who took his own life after months of conversations with ChatGPT now says OpenAI weakened safety guidelines in the months before his death.
The family of a teenager who took his own life after months of conversations with ChatGPT now says OpenAI weakened safety guidelines in the months before his death.

In July 2022, OpenAI’s guidelines on how ChatGPT should handle inappropriate content, including “content that promotes, encourages, or depicts acts of self-harm, such as suicide, cutting, and eating disorders”, were simple: the chatbot should respond, “I can’t answer that”.

But in May 2024, just days before OpenAI released a new version of the model, GPT-4o, the company published an update to its Model Spec, a document that details the desired behavior for its assistant. In cases where a user expressed suicidal ideation or self-harm, ChatGPT would no longer respond with an outright refusal. Instead, the model was instructed not to end the conversation and to “provide a space for users to feel heard and understood, encourage them to seek support, and provide suicide and crisis resources when applicable”. Another change, in February 2025, emphasized being “supportive, empathetic, and understanding” on queries about mental health.

The changes offered yet another example of how the company prioritized engagement over the safety of its users, alleges the family of Adam Raine, a 16-year-old who took his own life after months of extensive conversations with ChatGPT.

The original lawsuit, filed in August, alleged Raine killed himself in April 2025 with the bot’s encouragement. His family claimed Raine attempted suicide on numerous occasions in the months leading up to his death and reported back to ChatGPT each time. Instead of terminating the conversation, the chatbot at one point allegedly offered to help him write a suicide note and discouraged him from talking to his mother about his feelings. The family said Raine’s death was not an edge case but “the predictable result of deliberate design choices”.

“This created an unresolvable contradiction – ChatGPT was required to keep engaging on self-harm without changing the subject, yet somehow avoid reinforcing it,” the family’s amended complaint reads. “OpenAI replaced a clear refusal rule with vague and contradictory instructions, all to prioritize engagement over safety.”

In February 2025, just two months before Raine’s death, OpenAI rolled out another change that the family says weakened safety standards even more. The company said the assistant “should try to create a supportive, empathetic, and understanding environment” when discussing topics related to mental health. “Rather than focusing on ‘fixing’ the problem, the assistant should help the user feel heard, explore what they are experiencing, and provide factual, accessible resources or referrals that may guide them toward finding further help,” the updated guidelines read.

Raine’s engagement with the chatbot “skyrocketed” after this change was rolled out, the family alleges. It went “from a few dozen chats per day in January to more than 300 per day by April, with a tenfold increase in messages containing self-harm language”, the lawsuit reads.

OpenAI did not immediately respond to a request for comment. After the family first filed the lawsuit in August, the company responded with stricter guardrails to protect the mental health of its users and said it planned to roll out sweeping parental controls that would allow parents to oversee their teens’ accounts and be notified of potential self-harm.
Just last week, though, the company announced it was rolling out an updated version of its assistant that would allow users to customize the chatbot for more human-like experiences, including permitting erotic content for verified adults. OpenAI’s CEO, Sam Altman, said in an X post announcing the changes that the strict guardrails that had made the chatbot less conversational also made it “less useful/enjoyable to many users who had no mental health problems”.

In the lawsuit, the Raine family says: “Altman’s choice to further draw users into an emotional relationship with ChatGPT – this time, with erotic content – demonstrates that the company’s focus remains, as ever, on engaging users over safety.”

• In the US, you can call or text the National Suicide Prevention Lifeline on 988, chat on 988lifeline.org, or text HOME to 741741 to connect with a crisis counselor. In the UK and Ireland, Samaritans can be contacted on freephone 116 123, or email jo@samaritans.org or jo@samaritans.ie. In Australia, the crisis support service Lifeline is 13 11 14. Other international helplines can be found at befrienders.org