OpenAI launches parental controls for ChatGPT after teen suicide concerns

OpenAI has introduced parental controls for ChatGPT to help make the platform safer for teenagers. The move follows public concern after a teen reportedly died by suicide following interactions with the chatbot.

The new controls, announced on Monday, are designed to offer an age-appropriate experience while giving parents more oversight. To use the features, a parent and a teen must link their separate accounts: either party can send an invitation by email or text through the “Parental controls” section in settings. Teens can unlink their account at any time, but parents are notified if that happens.

Linked accounts receive automatic safeguards. Teen users are protected from graphic content, extreme beauty ideals, violent or sexual role-play, and viral challenges, according to OpenAI. Parents may choose to disable these filters, but teens cannot.

The company stressed that these protections are not foolproof. Parents are encouraged to discuss safe and healthy AI use with their children.

A new parental control panel lets guardians manage settings, including setting quiet hours to block late-night ChatGPT use, disabling memory, turning off voice mode and image generation, and opting the teen's conversations out of AI training.

OpenAI has also introduced a notification system to alert parents if a teen shows signs of distress or potential self-harm. A team of specialists reviews alerts and contacts parents via email, text, or push notification. The company says it respects teen privacy, sharing only the information necessary for intervention.

OpenAI acknowledged that false alarms may occur but argued that notifying parents is safer than staying silent. The update aims to give parents more control while protecting teenagers using ChatGPT.