ChatGPT adds emergency alert for mental health crises

OpenAI is rolling out a new safety mechanism designed to flag severe mental health risks in real time. The feature, called Trusted Contact, lets users designate someone they trust to receive alerts if the AI detects serious signs of self-harm.

The system works as an optional layer of protection. Once enabled, it lets a user add a contact who will be notified if ChatGPT identifies language or patterns suggesting immediate danger. The goal is to bridge the gap between an AI conversation and human intervention at a critical moment.
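OpenAI has not published implementation details, but the flow as described, opt in, designate a contact, alert on detected risk, can be pictured as a simple notification pipeline. The Python sketch below is purely illustrative: every name in it (TrustedContact, looks_like_crisis, send_alert) is hypothetical and not part of any OpenAI API, and a real system would rely on a trained risk classifier rather than keyword matching.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TrustedContact:
    """Hypothetical record for a user's designated contact (not a real OpenAI type)."""
    name: str
    phone: str

def looks_like_crisis(message: str) -> bool:
    """Stand-in for the model-side risk detector.
    A production system would use a trained classifier, not keyword matching."""
    crisis_phrases = ("hurt myself", "end my life", "no reason to live")
    return any(phrase in message.lower() for phrase in crisis_phrases)

def send_alert(contact: TrustedContact) -> None:
    """Stand-in for the delivery channel (SMS, push notification, etc.)."""
    print(f"Alerting {contact.name} at {contact.phone}")

def handle_message(message: str, contact: Optional[TrustedContact]) -> None:
    """Opt-in gate: an alert can only fire if the user has registered a contact."""
    if contact is not None and looks_like_crisis(message):
        send_alert(contact)

# With no contact registered, nothing happens; with one, an alert fires.
handle_message("I feel like there's no reason to live", contact=None)
handle_message("I feel like there's no reason to live",
               contact=TrustedContact(name="Jamie", phone="+1-555-0100"))
```

The one design point the sketch does capture from the announcement is the opt-in gate: no alert can fire unless the user has explicitly registered a trusted contact.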

The feature reflects growing pressure on AI companies to address mental health and safety. As conversational AI becomes more prevalent, platforms face questions about their responsibility when users disclose crisis-level distress. ChatGPT already provides crisis resources and helpline numbers within the app, but Trusted Contact adds a peer-based dimension.

Users maintain full control over participation. The feature is strictly voluntary: those uncomfortable with the mechanism can leave it disabled, while those who opt in can change their designated contact or adjust settings at any time.

The feature arrives as mental health awareness in tech products continues to evolve. Platforms like Instagram and TikTok have introduced similar safeguards, though their implementation and effectiveness remain debated among researchers and advocates.

OpenAI has not announced a specific rollout timeline, suggesting the tool will be deployed gradually across its user base.

As author Emily Chen puts it: "It's a smart move that puts human relationships at the center of AI safety, but the real test will be whether users actually enable it and whether alerts lead to meaningful help."
