Character.AI Retrains Chatbots for Teen Safety, Introduces Parental Controls

Matilda
Character.AI implements stricter content moderation and a separate teen model to address safety concerns.

Character.AI, a chatbot service popular with teens, has announced significant changes to its platform to prioritize user safety. The announcement follows recent scrutiny and lawsuits alleging the platform contributed to self-harm and suicide in some users.

Character.AI Implements Separate Teen Model and Content Moderation

In a press release, Character.AI detailed the launch of two separate large language models (LLMs): one for adults and one specifically for users under 18. The teen LLM features "more conservative" responses, particularly regarding romantic content. This includes stricter blocking of "sensitive or suggestive" outputs and attempting to identify and block user prompts seeking inappropriate content. The platform will also display a pop-up directing users to the National Suicide Prevention Lifeline if it detects "language referencing suicide or s…