OpenAI has refreshed ChatGPT's safety framework to better shield teenage users. The updated rules strengthen safeguards, clarify conversational boundaries, and introduce guidance that encourages turning to real-world support when discussions involve sensitive topics. The overhaul underscores OpenAI's commitment to teen protection and responsible AI use.

What's inside the update:

- Enhanced protections for under-18 users, with stricter content controls and age-appropriate responses
- Clear boundaries for high-risk topics such as mental health crises, self-harm, abuse, and other sensitive situations
- Guidance to escalate to real-world help, including prompts to contact trusted adults, caregivers, or healthcare professionals
- Directives to provide offline resources, crisis hotlines, and emergency contacts when appropriate
- Emphasis on human-in-the-loop oversight to ensure safe, compassionate guidance during delicate conversations

Implications for teens, parents, and educators:

- Safer, more supportive ChatGPT experiences for young users
- Clear pathways to offline support and professional assistance when needed
- Greater transparency around safety features and how to access help quickly

OpenAI's ongoing safety updates aim to minimize harm, protect privacy, and foster constructive, human-centered interaction with conversational AI.