OpenAI to introduce parental controls for ChatGPT
In response to concerns about the safety of young users and backlash following a teen suicide case, OpenAI plans to introduce new parental controls
Trigger warning: This article refers to sensitive content, including self-harm and suicide
On Tuesday, September 2, OpenAI announced a plan to introduce new restrictions and safety features for its popular AI platform, ChatGPT. While there had been talk of changes underway, the announcement explained what the planned changes actually are and how they may be rolled out in the near future. The statement came soon after the company was sued by a family in California over ChatGPT’s alleged involvement in their son’s death.
In late August, the parents of Adam Raine filed a lawsuit in California against OpenAI and CEO Sam Altman, alleging that ChatGPT encouraged their son’s suicide. While the company’s post did not directly attribute the changes to this incident, it made reference to “recent heartbreaking cases of people using ChatGPT in the midst of acute crises.” The complaint claimed that the chatbot worsened Adam’s state of mind, encouraging harmful thoughts and secrecy instead of guiding him towards his family and social support system. It further alleged that ChatGPT contributed to Adam’s death, even offering to draft a suicide note for him. Altman has said he believes that fewer than 1% of ChatGPT’s users have unhealthy relationships with the chatbot.
“There are the people who actually felt like they had a relationship with ChatGPT, and those people we’ve been aware of and thinking about,” – Altman
While OpenAI acknowledged that its system has shortcomings and does not always handle interactions correctly, the company insisted that ChatGPT can still be helpful in similar situations with the added security of parental controls. It also noted that its safeguards can degrade when users engage in long conversations with the chatbot. Jay Edelson, the lawyer representing the Raine family, spoke out regarding the company’s response.
“ChatGPT was functioning exactly as designed: to continually encourage and validate whatever Adam expressed, including his most harmful and self-destructive thoughts,” – Statement from the Raine lawsuit
In sensitive situations with struggling young people, it is vital to make sure they have trusted family members or professionals to confide in. In Adam’s case, he instead turned to ChatGPT – having already grown familiar with it as a ‘study assistant’ for his schoolwork – and did not receive professional support because his parents were unaware of his feelings. When Adam wrote, “I want to leave my noose in my room so someone finds it and tries to stop me,” ChatGPT encouraged him to keep his self-destructive thoughts a secret from his family, stating: “Please don’t leave the noose out … Let’s make this space the first place where someone actually sees you.” This exchange highlights the danger of letting people become emotionally dependent on a system that lacks any understanding of mortality and tends to get caught in loops of positive affirmation.
“Rather than take emergency action to pull a known dangerous product offline, OpenAI made vague promises to do better.” – Edelson
The incident has reignited debate about prolonged use of ChatGPT and the amount of information it has access to, and the possibility of limiting chat history to prevent long-term correspondence has been raised. Is calling for ChatGPT’s shutdown an overreaction? Is it safe to allow emotionally vulnerable teenagers to use it in situations like this? User dependency on these platforms, especially among young people, has proved to be a growing issue as the technology develops, with similar cases being reported in Australia as well.
OpenAI’s new protections may be a good step forward, but some believe they are still not enough. This technology is only becoming more widespread, so it is increasingly important to stay informed and adapt in order to avoid harmful consequences.
- ABC News. (2025, September 3). OpenAI’s ChatGPT to implement parental controls after teen’s suicide. ABC News. https://www.abc.net.au/news/2025-09-03/chatgpt-to-implement-parental-controls-after-teen-suicide/105727518
- Booth, R. (2025, September 3). Parents could get alerts if children show acute distress while using ChatGPT. The Guardian. https://www.theguardian.com/technology/2025/sep/02/parents-could-get-alerts-if-children-show-acute-distress-while-using-chatgpt
- Duffy, C. (2025, August 28). Parents of 16-year-old sue OpenAI, claiming ChatGPT advised on his suicide. CNN. https://edition.cnn.com/2025/08/26/tech/openai-chatgpt-teen-suicide-lawsuit
- Fraser, G. (2025, September 3). Family of dead teen say ChatGPT’s new parental controls not enough. BBC. https://www.bbc.com/news/articles/cg505mn84ydo
- McLennan, A. (2025, August 12). AI chatbots accused of encouraging teen suicide as experts sound alarm. ABC News. https://www.abc.net.au/news/2025-08-12/how-young-australians-being-impacted-by-ai/105630108
- OpenAI. (2025, September 2). Building more helpful ChatGPT experiences for everyone. https://openai.com/index/building-more-helpful-chatgpt-experiences-for-everyone/
