In an ongoing effort to safeguard younger users, Meta has announced a significant update to its age-verification tools: the company will now use artificial intelligence to detect teen users and automatically place them under stricter safety settings, even if they entered a false adult age at sign-up.
The move applies across Meta’s major platforms, including Instagram, Facebook, Messenger, and Threads, and marks a new chapter in the company’s ongoing push to protect minors online. In a blog post published on Monday, Meta confirmed that it will begin proactively identifying accounts it believes belong to teens, based on behavioral signals and AI analysis. Once flagged, these accounts will be placed into what the company calls “Teen Accounts,” a safety-focused experience that restricts certain interactions, content visibility, and time spent on the platform.
“We’ve been using artificial intelligence to determine age for some time, but leveraging it in this way is a big change,” Meta said.
The new system has raised eyebrows for its bold approach: not only will it override the user-provided birthdate when the AI suspects the person is underage, it will do so automatically. Users can later adjust their settings if the system gets it wrong, though Meta admits the process is not flawless.
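For readers who want a concrete picture of what such a pipeline could look like, here is a minimal, purely illustrative Python sketch. Meta has not published its models, signals, or thresholds, so every name, score, and cutoff below is an invented assumption; this shows the general shape of a classify-then-override flow, not the company’s actual implementation.

```python
# Hypothetical sketch of an AI-based age-flagging pipeline.
# Nothing here reflects Meta's real models or internal APIs;
# every signal name and threshold is invented for illustration.
from dataclasses import dataclass


@dataclass
class Account:
    user_id: str
    stated_age: int           # age derived from the sign-up birthdate
    teen_settings: bool = False


def predict_teen_probability(signals: dict) -> float:
    """Stand-in for a learned classifier over behavioral signals.
    A real system would use a trained model; this toy version just
    averages a few normalized indicator scores."""
    return sum(signals.values()) / max(len(signals), 1)


def enforce_teen_settings(account: Account, signals: dict,
                          threshold: float = 0.8) -> Account:
    """Place a likely-teen account into restricted settings even when
    the stated age is 18+, mirroring the proactive override the
    article describes. A misclassified user could later appeal."""
    if account.stated_age < 18 or predict_teen_probability(signals) >= threshold:
        account.teen_settings = True  # messaging limits, content filters, etc.
    return account


# Example: a self-declared 21-year-old whose signals strongly suggest a teen.
flagged = enforce_teen_settings(
    Account(user_id="u123", stated_age=21),
    signals={"teen_like_content": 0.9, "school_network_overlap": 0.85},
)
print(flagged.teen_settings)  # True
```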
The “Teen Account” features were first introduced in 2024 and include a suite of protections designed to create a more age-appropriate environment: limits on who can message the user, restrictions on certain types of content, and screen-time reminders. Parental controls are also part of the package, with users under 16 required to get guardian approval before changing key settings. Meta reports that 97% of under-16 users have kept these features enabled, a figure the company cites as evidence of their value and effectiveness.
This AI-powered expansion of teen protection tools comes amid mounting pressure from global lawmakers, child psychologists, and parental advocacy groups, who have criticized social media companies for not doing enough to protect the mental health and digital well-being of children. Instagram and TikTok in particular have come under fire in recent years for failing to curb harmful content and addictive usage patterns among young users.
To encourage transparency and foster healthy digital habits, Meta also plans to send notifications to parents of teen users, particularly on Instagram, offering guidance on digital safety and age-appropriate online behavior. The company emphasized the importance of honest age disclosure but acknowledged that more robust solutions, such as age verification through app stores and parental approvals, are needed at an industry-wide level.
However, not everyone is convinced. Critics argue that AI-powered age detection could become invasive or inaccurate, especially when users have minimal digital history. Devorah Heitner, author of Growing Up in Public, voiced concerns over potential privacy implications.
“For AI to be effective at determining user age, it would have to know more than it should, especially for a newer user with a limited digital footprint,” she said. “We need to see social apps do more to protect all users from invasive algorithms and harassment rather than focus solely on age-gating.”
Still, Meta’s latest update signals a stronger stance on youth safety in a competitive tech landscape where AI is increasingly shaping user experience. While the move may spark privacy debates, it also highlights a growing trend: the shift from reactive to proactive safety mechanisms in the digital age.