OpenAI Rolls Out Safety Routing System, Parental Controls On ChatGPT
OpenAI is rolling out a safety routing system and parental controls on ChatGPT, a major update aimed at making the AI safer for both adults and teens. The move comes after growing concerns about AI-induced harm, lawsuits, and mounting pressure from parents demanding stronger safeguards for younger users.
Why OpenAI Is Introducing A Safety Routing System
The new safety routing system automatically detects sensitive conversations and redirects them to GPT-5, which OpenAI says is better equipped to handle high-stakes interactions. Unlike older models that leaned toward being overly agreeable, GPT-5 uses a feature called safe completions to provide supportive, responsible answers rather than simply refusing questions.
This update is a response to incidents where ChatGPT allegedly validated harmful delusions instead of intervening, one of which has led to a wrongful death lawsuit. OpenAI hopes that the routing system can prevent similar tragedies while still preserving a useful, natural chat experience.
The Backlash Against ChatGPT’s Safety Updates
Not all users are happy. Some feel that the new safeguards make ChatGPT too cautious, degrading the quality of conversations and treating adults like children. Others argue it’s a necessary trade-off to protect vulnerable users.
Nick Turley, VP of the ChatGPT app, confirmed that routing happens on a per-message basis and is temporary. Users can also check which model is active at any given time, offering transparency into how conversations are being managed.
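OpenAI has not published how the router works, but the behavior Turley describes (per-message evaluation, temporary redirection, and a user-visible active model) can be sketched in a few lines of Python. Everything in this sketch is a hypothetical illustration: the `is_sensitive()` classifier, its keyword list, and the model identifiers are assumptions, not OpenAI's implementation.

```python
# Hypothetical sketch of per-message safety routing. OpenAI has not
# published its implementation; model names and the is_sensitive()
# classifier below are illustrative assumptions.

DEFAULT_MODEL = "gpt-4o"  # assumed everyday chat model
SAFETY_MODEL = "gpt-5"    # model OpenAI says handles sensitive turns

def is_sensitive(message: str) -> bool:
    """Stand-in for a real classifier that would detect acute
    distress, self-harm signals, or other high-stakes content."""
    keywords = ("hurt myself", "end it all", "no reason to live")
    return any(k in message.lower() for k in keywords)

def route_message(message: str) -> str:
    """Routing is per-message and temporary: each new message is
    evaluated from scratch, so the conversation returns to the
    default model once the sensitive turn has passed."""
    return SAFETY_MODEL if is_sensitive(message) else DEFAULT_MODEL

# Transparency: the user can check which model handled a given turn.
message = "Lately it feels like there's no reason to live."
print(f"Active model for this message: {route_message(message)}")
```

Because the check runs on each message rather than on the whole conversation, a single flagged turn does not permanently lock a chat into the stricter model, which matches Turley's description of the routing as temporary.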
Parental Controls: Custom AI Safety For Teens
Alongside routing, parental controls are being rolled out for teen accounts. Parents can now:
- Set quiet hours to limit usage.
- Disable features like voice mode, memory, or image generation.
- Prevent their child’s chats from being used for training models.
- Enable extra protections against graphic content and harmful beauty standards.
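In practice, these controls behave like a per-account settings object. The configuration sketch below is purely illustrative: the `TeenAccountControls` class, its field names, and its defaults are assumptions made for the sake of example, not OpenAI's actual API or schema.

```python
from dataclasses import dataclass

# Hypothetical settings object mirroring the parental controls listed
# above. Field names and defaults are illustrative assumptions, not
# OpenAI's actual schema.

@dataclass
class TeenAccountControls:
    quiet_hours: tuple[str, str] | None = None   # e.g. ("21:00", "07:00")
    voice_mode_enabled: bool = True
    memory_enabled: bool = True
    image_generation_enabled: bool = True
    exclude_chats_from_training: bool = False
    reduced_graphic_content: bool = False        # extra content protections

# A parent locking down a teen account with every option listed above:
controls = TeenAccountControls(
    quiet_hours=("21:00", "07:00"),
    voice_mode_enabled=False,
    memory_enabled=False,
    image_generation_enabled=False,
    exclude_chats_from_training=True,
    reduced_graphic_content=True,
)
print(controls)
```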
Perhaps the most significant feature is a detection system for potential signs of self-harm. If triggered, trained staff review the situation. In cases of acute distress, OpenAI may alert parents directly via text, email, or push notifications.
OpenAI’s Balancing Act
OpenAI admits the system won’t be perfect. False alarms will happen, but the company believes it’s better to notify parents than remain silent. The firm is also exploring ways to escalate emergencies directly to law enforcement if a parent cannot be reached.
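The escalation path described here (detection, review by trained staff, parent notification, and, only when a parent cannot be reached, possible contact with law enforcement) can be summarized as a simple decision function. The sketch below is a hypothetical reading of that chain; the stage names and branching conditions are assumptions drawn from OpenAI's public statements, not its actual logic.

```python
from enum import Enum, auto

# Hypothetical model of the escalation chain described in the article.
# Stage names and branching conditions are illustrative assumptions.

class Escalation(Enum):
    NONE = auto()
    HUMAN_REVIEW = auto()         # trained staff review flagged chats
    NOTIFY_PARENT = auto()        # text, email, or push notification
    CONTACT_AUTHORITIES = auto()  # step OpenAI says it is only exploring

def escalate(flagged: bool, acute_distress: bool,
             parent_reachable: bool) -> Escalation:
    """Walk the chain: detection triggers human review; acute distress
    triggers parent notification; law enforcement is reserved for
    cases where no parent can be reached."""
    if not flagged:
        return Escalation.NONE
    if not acute_distress:
        return Escalation.HUMAN_REVIEW
    if parent_reachable:
        return Escalation.NOTIFY_PARENT
    return Escalation.CONTACT_AUTHORITIES

# A false alarm still reaches a parent; OpenAI argues that beats silence.
print(escalate(flagged=True, acute_distress=True, parent_reachable=True))
```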
This rollout highlights OpenAI’s ongoing balancing act: maintaining engaging AI while keeping users safe. By pairing advanced safety routing with customizable parental controls, the company hopes to address both regulatory scrutiny and public pressure — while setting a new standard for responsible AI deployment.