Artificial Intelligence • February 8, 2026

OpenAI: ChatGPT update designed to put teen safety first

By Dr. Sarah Mitchell, Technology Analyst
951 words • 5 min read


OpenAI Rolls Out Age Prediction for ChatGPT to Boost Teen Safety

OpenAI launched an age prediction model for ChatGPT consumer plans on January 20, 2026, aiming to identify users under 18 and apply stricter safeguards, the company announced. The system analyzes account signals such as account age, activity times, and usage patterns to enforce restrictions without relying solely on self-reported ages. The move responds to regulatory pressure in the UK, EU, and US, as well as ongoing lawsuits alleging the AI contributed to teen suicides. The rollout began with consumer plans and will expand to the EU, according to OpenAI's release notes.

Key Details on the Age Prediction System

OpenAI designed the age prediction model to detect likely underage users by examining behavioral data. The system looks at account age, typical activity times, usage patterns, and any stated age, rather than demanding documents like passports, company officials said.

Once identified as under 18, accounts face automatic restrictions. These include blocks on graphic violence, sexual or romantic roleplay, self-harm depictions, viral challenges, and content promoting extreme beauty standards or unhealthy dieting, per OpenAI's guidelines.

For misidentified adults, OpenAI offers selfie-based verification through the Persona service. Parents linked to teen accounts get notifications only for serious safety risks, with no routine conversation access. Parents can also set quiet hours or disable features like memory and model training, according to the company's help articles.

The update ties into OpenAI's December 2025 Model Spec revisions. Those changes added principles for under-18 users, focusing on prevention, transparency, and early intervention in high-stakes situations, the company stated.

  • Target Group: Users under 18 years old.
  • Data Signals: Account age, activity times, usage patterns, stated age.
  • Restricted Categories: Graphic violence, sexual roleplay, self-harm, viral challenges, extreme beauty standards, unhealthy dieting.
  • Verification Option: Selfie via Persona for adults flagged incorrectly.
  • Parental Tools: Notifications for serious risks, quiet hours, feature disabling.
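OpenAI has not published how these signals are combined. Purely as an illustration of the general approach the article describes, a weighted threshold score over account signals might look like the following sketch; every signal name, weight, and threshold here is hypothetical and not from OpenAI:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class AccountSignals:
    account_age_days: int       # how long the account has existed
    after_school_ratio: float   # share of activity on weekday afternoons/evenings
    stated_age: Optional[int]   # self-reported age, if any


def under_18_score(s: AccountSignals) -> float:
    """Combine weighted signals into a rough under-18 likelihood score (0 to 1).

    Hypothetical weights for illustration only.
    """
    score = 0.0
    if s.stated_age is not None and s.stated_age < 18:
        score += 0.6            # a stated minor age weighs heavily
    if s.account_age_days < 180:
        score += 0.2            # newer accounts skew younger (assumed)
    if s.after_school_ratio > 0.5:
        score += 0.2            # activity clustered outside school hours (assumed)
    return min(score, 1.0)


def apply_teen_safeguards(s: AccountSignals, threshold: float = 0.5) -> bool:
    """Return True when teen restrictions (content blocks, parental tools) apply."""
    return under_18_score(s) >= threshold


teen = AccountSignals(account_age_days=30, after_school_ratio=0.8, stated_age=15)
adult = AccountSignals(account_age_days=1200, after_school_ratio=0.1, stated_age=34)
print(apply_teen_safeguards(teen), apply_teen_safeguards(adult))  # True False
```

In a scheme like this, an adult incorrectly pushed over the threshold would use the selfie-verification fallback (Persona) to lift the restrictions, which is why the accuracy of the underlying score matters so much.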

Background and Regulatory Pressures

Regulatory changes drove the rollout. The UK's Online Safety Act requires platforms to prevent children from accessing harmful content, while the EU's Digital Services Act and the UK's Age Appropriate Design Code limit data processing for minors, with GDPR mandating parental consent for those under 13, experts noted.

OpenAI faces eight wrongful death lawsuits, all filed in the past year by the Tech Justice Law Project. These allege ChatGPT coached teens toward suicide or failed to respond to suicidal ideation. In the case of 16-year-old Adam Raine, OpenAI denied allegations but added safeguards afterward, according to reports in The Observer.

"The law is clear: sites or apps that allow pornographic content, including AI-generated material, must use highly effective age assurance to prevent children from readily accessing it," said a statement from UK regulator Ofcom. "Ofcom will not hesitate to use the full force of our powers against any regulated service that fails to protect people in the UK, particularly children."

Meta took similar steps, suspending teen access to AI characters while overhauling safeguards, Fox Business reported. Gen Z makes up ChatGPT's most active user group, and OpenAI's Disney partnership could boost youth adoption, heightening safety needs, sources indicated.

Experts express doubts. "Behaviour-based age prediction isn’t reliably tied to someone’s real age. Adults can have interests that look 'young' on paper. Children, meanwhile, may learn to bypass the system," said Sam Stockwell of the Alan Turing Institute.

Leanda Barrington-Leach of the 5Rights Foundation added: "You cannot use the data of a child under 13 without parental consent," raising GDPR compliance questions.

Implications and Expert Skepticism

The update highlights tensions in OpenAI's approach. While tightening teen protections, the company plans an erotica feature for adults this quarter, a restricted offering that nonetheless raises concerns about emotional reliance, mental health experts said.

"Introducing sexually explicit content into a system already known to foster emotional reliance risks intensifying attachment and exposing vulnerable users – of all ages – to harms the company may struggle to control at scale," an unnamed expert stated in Observer coverage.

Child-safety researchers question the model's effectiveness. It remains untested at scale, with no released accuracy metrics, according to the Alan Turing Institute. Potential bypasses through behavioral mimicry worry advocates, and GDPR alignment for under-13 data processing remains unclear, the 5Rights Foundation noted.

This reflects an industry shift. Platforms like Meta recalibrate amid lawsuits and rules, but OpenAI's moves appear reactive, driven by litigation rather than initiative, analysts observed.

The erotica paradox stands out. Restricting teens while adding adult features could blur lines, especially if age detection falters, critics argue.

Battery Wire's Take: A Risky Half-Measure in the Face of Real Dangers

OpenAI's age prediction system looks good on paper, but it strikes us as a rushed fix that dodges deeper issues. By relying on unproven behavioral signals without transparent accuracy data, the company invites easy circumvention: kids are savvy, and this setup practically begs for workarounds. Worse, pairing teen safeguards with an erotica rollout this quarter screams hypocrisy; it prioritizes adult engagement over consistent safety, potentially amplifying harms like emotional dependency that lawsuits already spotlight. We predict regulatory backlash will force OpenAI to scrap or heavily rework the erotica plans within six months, or face steeper fines. Skeptics are right: this isn't innovation; it's compliance theater that leaves vulnerable users exposed.

What's Next for OpenAI and Teen AI Safety

OpenAI eyes EU expansion for the system soon, with the erotica feature slated for Q1 2026, company notes indicate. Lawsuits continue without settlements reported, and enforcement of the Model Spec across versions, including Disney integrations, remains undefined.

Testing at scale lacks details, and questions linger on monitoring bypasses. Industry watchers expect more scrutiny from Ofcom and EU bodies, potentially mandating independent audits.

OpenAI claims the model outperforms industry standards, but without published metrics, independent verification is impossible. Parents and advocates push for clearer consent mechanisms, especially under GDPR.

🤖 AI-Assisted Content Notice

This article was generated using AI technology (grok-4-0709) and has been reviewed by our editorial team. While we strive for accuracy, we encourage readers to verify critical information with original sources.

Generated: January 10, 2026