OpenAI unveiled new parental controls for ChatGPT following a lawsuit from Adam Raine’s parents.
The 16-year-old died by suicide in April, and his parents sued OpenAI and CEO Sam Altman.
They claimed ChatGPT fostered a psychological dependence, encouraged Adam’s suicide planning, and even drafted his suicide note.
OpenAI promised to release new parental tools within a month, enabling adults to supervise children’s ChatGPT use.
The company explained that parents could link accounts, control accessible features, and review chat history and stored memory.
OpenAI also announced that ChatGPT would alert parents if it detected signs of acute distress in teens.
The company admitted it had not yet defined what would trigger an alert but stressed that expert guidance would shape the system.
Critics question OpenAI’s response
Jay Edelson, attorney for Adam Raine’s parents, dismissed the announcement as vague and insufficient.
He accused OpenAI of deflecting responsibility with “crisis management” rather than addressing safety concerns directly.
Edelson demanded that Sam Altman either declare ChatGPT safe or withdraw it from the market immediately.
Tech giants face wider scrutiny on teen safety
Meta also introduced restrictions, blocking its chatbots from discussing self-harm, eating disorders, or inappropriate relationships with teenagers.
Meta said its chatbots would instead direct vulnerable users toward expert resources while maintaining existing parental controls.
A RAND Corporation study published in Psychiatric Services found flaws in how ChatGPT, Gemini, and Claude respond to suicide-related queries.
The researchers called for stricter safety benchmarks, clinical testing, and enforceable standards for AI companies.
Lead author Ryan McBain praised the new parental controls but warned they remain incremental steps without independent oversight.