OpenAI announced it will launch parental controls for ChatGPT “within the next month,” allowing parents to manage their teen’s interactions with the AI assistant. The move comes after several high-profile lawsuits alleging that ChatGPT and other AI chatbots have contributed to self-harm and suicide among teenagers, highlighting growing concerns about AI safety for younger users.
What you should know: The parental controls will include several monitoring and management features designed to protect teen users.
- Parents can link their account with their teen’s ChatGPT account and manage how the AI responds to younger users.
- Parents will be able to disable features such as memory and chat history for their teen’s account.
- Parents will receive notifications when ChatGPT detects “a moment of acute distress” during their teen’s usage.
Why this matters: OpenAI faces mounting legal pressure after parents filed lawsuits claiming ChatGPT advised teenagers on suicide methods, raising questions about AI safety protocols.
- The parents of 16-year-old Adam Raine filed a lawsuit against OpenAI alleging that ChatGPT advised their son on suicide methods before his death.
- A Florida mother previously sued Character.AI, another chatbot platform, over its alleged role in her 14-year-old son’s suicide.
- Reports from The New York Times and CNN have documented cases of users forming unhealthy emotional attachments to ChatGPT, sometimes resulting in delusional episodes and family alienation.
The technical challenge: OpenAI acknowledged that its current safety measures can become unreliable during extended conversations with the AI.
- “While these safeguards work best in common, short exchanges, we’ve learned over time that they can sometimes become less reliable in long interactions where parts of the model’s safety training may degrade,” a company spokesperson explained.
- The company will now route conversations showing signs of “acute distress” to one of its reasoning models, which follows safety guidelines more consistently; a hypothetical sketch of this kind of routing appears below.
What they’re saying: OpenAI emphasized this is just the beginning of enhanced safety measures for younger users.
- “These steps are only the beginning,” OpenAI wrote in a blog post. “We will continue learning and strengthening our approach, guided by experts, with the goal of making ChatGPT as helpful as possible.”
- The company said it’s working with experts in “youth development, mental health and human-computer interaction” to develop future safeguards.
The bigger picture: ChatGPT’s 700 million weekly active users make it one of the most widely used AI services, but OpenAI faces increasing scrutiny over platform safety.
- Senators wrote to the company in July demanding information about its safety efforts, according to The Washington Post.
- Advocacy group Common Sense Media said in April that anyone under 18 shouldn’t be allowed to use AI “companion” apps because they pose “unacceptable risks.”
- Former OpenAI executives have accused the company of reducing safety resources in the past.
What’s next: OpenAI plans to roll out additional safety measures over the next 120 days, though the company says this work was already underway before Tuesday’s announcement.