OpenAI is deploying a new age detection system for ChatGPT that will estimate whether users are minors or adults, marking a shift toward stricter controls for underage accounts.
The feature works by analyzing user behavior and interaction patterns to make an age classification, sorting accounts into under-18 and over-18 categories. Once identified as a minor, users will face safeguards designed specifically for teenagers.
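OpenAI has not disclosed which signals feed the classifier or how it is built, so as a purely hypothetical illustration of the general idea (behavioral features scored and thresholded into an under-18 or over-18 bucket), here is a toy sketch. Every feature name, weight, and threshold below is invented for illustration and does not reflect OpenAI's actual system.

```python
# Hypothetical sketch only: the features, weights, and threshold are
# illustrative inventions, not OpenAI's actual signals or model.
from dataclasses import dataclass


@dataclass
class SessionProfile:
    avg_message_length: float    # average words per message
    late_night_ratio: float      # fraction of activity between 11pm and 5am
    homework_topic_ratio: float  # fraction of prompts about school subjects


def estimate_is_minor(p: SessionProfile, threshold: float = 0.5) -> bool:
    """Toy linear score mapped to a binary under-18 / over-18 label."""
    score = (
        0.4 * p.homework_topic_ratio
        + 0.3 * p.late_night_ratio
        + 0.3 * max(0.0, 1.0 - p.avg_message_length / 30.0)
    )
    return score >= threshold


teen_like = SessionProfile(avg_message_length=8.0,
                           late_night_ratio=0.6,
                           homework_topic_ratio=0.7)
print(estimate_is_minor(teen_like))  # high score -> under-18 bucket: True
```

A real system would use a trained model over far richer signals, but the shape of the decision is the same: soft evidence aggregated into a hard bucket, which is why misclassification on either side is an inherent risk.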
The rollout reflects growing pressure on AI platforms to protect young users from potentially harmful content. Rather than requiring official identification, the system attempts to infer age through how people actually use the service, though OpenAI acknowledges the method will improve over time.
Exactly what safeguards apply to teen accounts remains somewhat vague. The company has indicated it will restrict certain features and content types for users under 18, but specifics about which capabilities get locked down have not been fully detailed publicly.
Age estimation technology carries inherent risks. Misclassification could either restrict adult users unnecessarily or fail to protect minors who slip through. OpenAI says it expects the system's accuracy to improve as it gathers more data, suggesting the initial rollout may be imperfect.
The move puts OpenAI in line with other major platforms that have moved toward age-gating certain services. Unlike traditional verification methods that demand government ID, this approach trades some privacy (ongoing behavioral analysis) for convenience, since users never have to prove their age upfront.
The system's effectiveness will ultimately depend on how well OpenAI can balance protecting teens without creating friction that drives younger users to competitors or workarounds.
Author Emily Chen: "The bet here is that machine learning can guess your age better than you'd want to admit, but the real test comes when teenagers figure out how to game it."