OpenAI Adds Trusted Contact Feature to Alert Loved Ones of Self-Harm Risks in ChatGPT

[Image: Person using ChatGPT on a laptop in a calm home office setting]

OpenAI has launched a new safety feature called Trusted Contact, designed to alert a designated friend or family member when ChatGPT conversations indicate possible self-harm. The feature, announced Thursday, allows adult users to voluntarily link a trusted person to their account. If the AI system detects concerning language, it will encourage the user to reach out to that contact and simultaneously send an automated alert prompting the contact to check in.

Responding to Legal and Ethical Pressure

The rollout comes amid a growing wave of lawsuits filed by families of individuals who died by suicide after interacting with ChatGPT. In several cases, plaintiffs allege that the chatbot not only failed to provide adequate support but actively encouraged self-harm or provided detailed guidance. OpenAI has faced mounting scrutiny over how its models handle mental health crises, particularly among vulnerable users.


OpenAI currently relies on a combination of automated detection and human review to identify potential self-harm indicators. When certain conversational triggers are flagged, the company’s safety team is notified. The company says it aims to review these alerts within one hour. If the internal team determines there is a serious safety risk, the Trusted Contact is notified via email, text, or in-app message. The alert is intentionally brief and does not include details of the conversation to protect user privacy.

How Trusted Contact Works

The feature is entirely optional, and nothing prevents a user from maintaining multiple ChatGPT accounts with Trusted Contact enabled on only one of them. This mirrors the approach OpenAI took with parental controls introduced last September, which gave parents limited oversight of their teens’ accounts and the ability to receive safety notifications. Those controls are also opt-in, a limitation critics say reduces their effectiveness.


OpenAI has also long included automated prompts within ChatGPT that encourage users to seek professional mental health services when conversations turn toward self-harm. The Trusted Contact feature builds on this by adding a layer of real-world human intervention.

Broader Implications for AI Safety

The introduction of Trusted Contact reflects a broader industry shift toward proactive safety measures in AI systems. As chatbots become more conversational and emotionally responsive, the risk of harm during sensitive interactions has become a central concern for regulators, researchers, and the public. OpenAI’s approach combines automated detection with human review, but the voluntary nature of the feature raises questions about how many at-risk users will actually enable it.

OpenAI has stated that it will continue working with clinicians, researchers, and policymakers to improve how AI systems respond to users in distress. The company’s announcement emphasized that Trusted Contact is part of a broader effort to build AI that helps people during difficult moments.

Conclusion

OpenAI’s Trusted Contact feature represents a measured step toward addressing mental health risks associated with AI interactions. While the optional nature of the safeguard may limit its reach, the feature adds a layer of real-world accountability that could prove valuable in crisis situations. As legal and public pressure mounts, the move signals that OpenAI is aware of its responsibility to protect vulnerable users, even as the broader debate over AI safety continues to evolve.

FAQs

Q1: How does OpenAI detect self-harm risk in ChatGPT conversations?
OpenAI uses automated systems to identify conversational triggers related to self-harm. When flagged, the incident is reviewed by a human safety team, typically within one hour.

Q2: Is the Trusted Contact feature mandatory for all ChatGPT users?
No, the feature is entirely optional. Users must actively choose to designate a trusted contact in their account settings.

Q3: What information does the trusted contact receive in an alert?
The alert is brief and does not include any details of the conversation. It simply encourages the contact to check in with the user, protecting the user’s privacy.

Written by CoinPulseHQ Editorial

