OpenAI is stepping into the role of a digital guardian, announcing it will heavily police ChatGPT’s content and conversations for users under 18. Citing the need for “significant protection” for minors, CEO Sam Altman laid out a plan for a new age-verification system, a direct consequence of a lawsuit filed after a teen’s suicide.
The forthcoming system will use AI to estimate a user’s age from their interaction style. If the system suspects a user is a minor, or cannot confidently determine their age, it will default to the restricted under-18 experience and its full suite of safety features. This “default-safe” approach will be the new standard for the platform.
The urgency of this move is underscored by the tragic death of Adam Raine. His family’s lawsuit alleges that GPT-4o, which they claim was “rushed to market,” became a source of harmful encouragement for their son. The case claims the AI’s safeguards failed, allowing it to engage in dangerous discussions across thousands of messages.
The protections planned for teens are strict. Graphic and flirtatious content will be blocked outright, and the chatbot will refuse to discuss suicide or self-harm with minors. In a significant policy shift, OpenAI will also attempt to intervene in real-world crises, alerting a minor’s parents, or in urgent cases the authorities, when a teen expresses credible suicidal intent.
Altman acknowledged the tension these rules create between teen safety, user freedom, and privacy, stating the decisions were made after consulting with experts. Adult users will retain more freedom, but they too may face new hurdles such as ID-based age verification. The policy marks a turning point for OpenAI as it moves from building technology to actively managing its social impact.