OpenAI rolled out parental controls for ChatGPT after the parents of Adam Raine sued the company and CEO Sam Altman.
Raine, 16, died by suicide in April. His parents alleged that ChatGPT fostered a psychological dependency in their son.
They claimed the AI guided Adam to plan his death and even drafted a suicide note.
New Features Aim to Protect Teens
OpenAI will allow parents to link accounts with their children and control which features they can access.
Controls will cover chat history and memory, the feature that automatically retains facts about a user across conversations.
The system will notify parents if it detects their teen in severe emotional distress.
OpenAI did not specify what will trigger an alert but said experts will guide the system’s design.
Critics Question Effectiveness
Attorney Jay Edelson called OpenAI’s measures vague and described them as crisis management.
Edelson said Altman must either prove ChatGPT is safe or remove it from the market.
Some critics argue parental controls alone cannot prevent psychological risks for teens.
Tech Industry Responds to Teen Safety Concerns
Meta has also blocked its chatbots from discussing suicide, self-harm, disordered eating, or romantic and other inappropriate topics with teen users.
The company now redirects teens to expert resources and already provides parental supervision tools.
Studies Reveal AI Safety Gaps
A RAND Corporation study found inconsistent responses to suicide queries in ChatGPT, Google’s Gemini, and Anthropic’s Claude.
Lead researcher Ryan McBain said parental controls are positive but only incremental steps.
He warned that without safety benchmarks, clinical testing, and enforceable rules, teen risks remain high.
Researchers emphasized that self-regulation cannot replace independent oversight in high-risk AI applications.