OpenAI Introduces New ‘Trusted Contact’ Safeguard For Cases Of Possible Self-Harm

ChatGPT Trusted Contact introduces new self-harm safeguards with emergency alerts for friends and family.
Matilda
ChatGPT Trusted Contact Adds New Self-Harm Safeguards

OpenAI has introduced a new ChatGPT safety feature called Trusted Contact, designed to help users during potential mental health crises. The optional safeguard allows users to designate a trusted friend or family member who can receive alerts if conversations suggest possible self-harm. The update arrives as AI companies face increasing scrutiny over chatbot safety, emotional dependency concerns, and lawsuits tied to harmful AI interactions.

OpenAI Expands ChatGPT Safety Features

The new Trusted Contact feature represents one of the company's most direct attempts yet to address concerns about emotional harm linked to AI chatbots. Adult ChatGPT users can now choose a trusted individual within their account settings. If OpenAI's systems detect signs of self-harm or suicidal ideation during conversations, the platform may encourage the user to contact that trusted person directly. In serious situations, OpenAI's safety system can also notify…