ChatGPT is rolling out a new age prediction feature on consumer plans designed to help determine whether an account likely belongs to someone under 18, allowing the platform to apply age-appropriate experiences and safeguards.
The update builds on existing protections already in place for users who self-identify as under 18 during sign-up. Those users automatically receive additional safeguards intended to reduce exposure to sensitive or potentially harmful content, while adult users retain access to the full ChatGPT experience within established safety boundaries.
How ChatGPT’s age prediction works
ChatGPT uses an age prediction model to estimate whether an account likely belongs to someone under 18. The system evaluates a combination of behavioral and account-level signals, including how long an account has existed, typical times of day a user is active, usage patterns over time, and a user’s stated age.
According to the company, deploying the model allows it to better understand which signals improve accuracy and to refine the system over time.
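OpenAI has not published details of the model, but as a rough illustration, the sketch below shows how the kinds of signals mentioned above (account age, typical active hours, usage patterns, stated age) might be combined into an under-18 likelihood score. Every name, weight, and threshold here is a hypothetical assumption; the real system is a trained classifier, not hand-tuned rules.

```python
from dataclasses import dataclass

@dataclass
class AccountSignals:
    """Hypothetical account-level signals of the kind the article describes."""
    account_age_days: int              # how long the account has existed
    typical_active_hours: list[int]    # hours of day (0-23) the user is usually active
    stated_age: int | None             # self-reported age at sign-up, if provided
    weekly_sessions: float             # rough usage pattern over time

def estimate_under_18_score(signals: AccountSignals) -> float:
    """Toy heuristic returning a 0-1 'likely under 18' score.

    Purely illustrative: OpenAI's actual model and its weights are not public.
    """
    score = 0.0
    if signals.stated_age is not None and signals.stated_age < 18:
        score += 0.6    # self-identified minors already receive safeguards
    if signals.account_age_days < 30:
        score += 0.1    # newer accounts carry less history to judge from
    # Activity concentrated in after-school and evening hours
    after_school = sum(1 for h in signals.typical_active_hours if 15 <= h <= 22)
    if signals.typical_active_hours and after_school / len(signals.typical_active_hours) > 0.7:
        score += 0.2
    if signals.weekly_sessions > 20:
        score += 0.1
    return min(score, 1.0)
```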
Users who are incorrectly placed into the under-18 experience can confirm their age and restore full access through a selfie-based verification process using Persona, a secure identity verification service. Users can also check whether safeguards have been applied to their account by navigating to Settings and then Account.
If ChatGPT’s system flags an account as potentially belonging to a minor, the platform shifts that user into a more restricted experience. This automatically limits access to certain types of material that are considered higher risk for younger users, including:
- Graphic violence or gory content
- Viral challenges that could push risky or harmful behavior
- Sexual, romantic, or violent role play
- Depictions of self-harm
- Content tied to extreme beauty standards, unhealthy dieting, or body shaming
The company said the approach is informed by expert input and academic research related to child development, including differences in risk perception, impulse control, peer influence, and emotional regulation among teens. When age information is incomplete or uncertain, ChatGPT defaults to a safer experience.
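As a minimal sketch of that "default to a safer experience" rule, the snippet below gates the restricted categories listed above and falls back to the under-18 experience whenever the age signal is missing or uncertain. The category identifiers and function names are illustrative assumptions, not OpenAI's actual implementation.

```python
from enum import Enum

class Experience(Enum):
    FULL = "full"
    UNDER_18 = "under_18"

# Categories restricted in the under-18 experience, following the article's list.
RESTRICTED_FOR_MINORS = {
    "graphic_violence",
    "viral_challenges",
    "sexual_romantic_violent_roleplay",
    "self_harm_depictions",
    "extreme_beauty_or_dieting_content",
}

def choose_experience(predicted_under_18: bool | None) -> Experience:
    """Default to the more restricted experience when the age signal
    is uncertain or unavailable (None), as the article describes."""
    if predicted_under_18 is None or predicted_under_18:
        return Experience.UNDER_18
    return Experience.FULL

def is_allowed(category: str, experience: Experience) -> bool:
    """Illustrative content gate: block higher-risk categories for minors."""
    if experience is Experience.UNDER_18:
        return category not in RESTRICTED_FOR_MINORS
    return True
```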
In addition to automatic safeguards, parents can further customize a teen’s experience through parental controls. These tools allow parents to set quiet hours, manage features such as memory or model training, and receive notifications if signs of acute distress are detected.
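To make the shape of those controls concrete, here is a hypothetical settings structure covering the options the article mentions (quiet hours, memory, model training, distress notifications). The field names and defaults are assumptions for illustration only, not OpenAI's actual parental-controls interface.

```python
from dataclasses import dataclass
from datetime import time

@dataclass
class ParentalControls:
    """Hypothetical parental-control settings mirroring the options described
    in the article; field names are illustrative, not OpenAI's real API."""
    quiet_hours_start: time | None = time(21, 0)   # no access after 9 PM...
    quiet_hours_end: time | None = time(7, 0)      # ...until 7 AM
    memory_enabled: bool = True                    # allow memory of past chats
    model_training_enabled: bool = False           # keep the teen's data out of training
    notify_on_acute_distress: bool = True          # alert parent if distress signs appear

# Example: a stricter configuration a parent might choose.
strict = ParentalControls(memory_enabled=False, model_training_enabled=False)
```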
The rollout comes just weeks after OpenAI unveiled ChatGPT Health, a new feature aimed at helping users understand their health.


