Meta has announced a new system that uses artificial intelligence to analyze visual cues, including height and bone structure, to identify users under the age of 13 on Facebook and Instagram. This AI-driven age detection technology represents a significant step in the company’s efforts to comply with child safety regulations and protect minors on its platforms. The system scans photos and videos for these physical indicators, not for facial recognition purposes.
How Meta AI Age Detection Works
Meta’s AI examines general themes and visual cues within user-uploaded content. The company clarifies that this technology does not identify specific individuals. Instead, it estimates a user’s general age by analyzing factors like height and bone structure. This visual analysis is then combined with other data points, such as text analysis of posts and user interactions, to increase the accuracy of underage account identification.
This multi-layered approach helps Meta detect accounts that may have misrepresented their age during registration. The system is currently operational in select countries, with plans for a broader global rollout. Meta intends to expand this technology to other parts of its apps, including Instagram Live and Facebook Groups, in the future.
Key Components of the Age Detection System
- Visual Cues: The AI analyzes height and bone structure from photos and videos.
- Contextual Clues: The system examines profile information, such as birthday posts and school grade mentions.
- Behavioral Analysis: The AI reviews user interactions and engagement patterns for age-related signals.
- Cross-Platform Scanning: The technology scans posts, bios, captions, and other profile elements.
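The components above suggest a signal-fusion pattern: several independent detectors each produce an age estimate, and their outputs are combined before any enforcement action. Meta has not published how its system weighs these signals, so the sketch below is purely illustrative; the function names, weights, and threshold are assumptions, not Meta's actual implementation.

```python
# Hypothetical sketch of a multi-signal approach: each detector (visual,
# contextual, behavioral) returns an estimated probability that the account
# belongs to a user under 13, and the scores are combined with weights
# before a conservative review threshold is applied. All names, weights,
# and thresholds are illustrative assumptions, not Meta's actual system.

def combine_age_signals(visual: float, contextual: float, behavioral: float,
                        weights=(0.5, 0.3, 0.2)) -> float:
    """Weighted average of per-signal under-13 probabilities (0.0 to 1.0)."""
    scores = (visual, contextual, behavioral)
    return sum(w * s for w, s in zip(weights, scores))

def flag_for_review(combined: float, threshold: float = 0.8) -> bool:
    """Flag the account for deactivation and age verification only when the
    combined score clears the threshold, so no single unreliable signal can
    trigger enforcement on its own."""
    return combined >= threshold

# Example: a strong visual signal alone does not flag the account.
score = combine_age_signals(visual=0.9, contextual=0.4, behavioral=0.3)
print(flag_for_review(score))  # prints False: 0.63 is below the 0.8 threshold
```

The design choice this illustrates is the one the article describes: requiring agreement across several weaker signals reduces false positives compared with acting on any single method.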
Why Meta is Implementing This Technology
The announcement comes in the wake of significant legal and regulatory pressure. A New Mexico jury recently ordered Meta to pay $375 million in civil penalties. The court found the company misled consumers about platform safety and put children at risk. The ruling also mandated fundamental changes to Meta’s platform operations.
This case is one of many lawsuits that Meta and other major technology companies face regarding child safety. The new AI age detection system is a direct response to these legal challenges and growing public concern about minors’ online safety. Meta has also threatened to shut down its services in New Mexico, highlighting the high stakes of these regulatory battles.
Expansion of Teen Account Protections
In addition to the AI age detection system, Meta is expanding its “Teen Accounts” feature on Instagram. These accounts automatically place users into a stricter experience with enhanced safeguards. The company is rolling out this technology to 27 countries in the European Union and Brazil.
Teen accounts include several protective measures. Users can only receive direct messages from people they follow or are already connected to. The platform hides potentially harmful content and sets accounts to private by default. Meta also announced plans to bring this technology to Facebook in the United States, followed by the United Kingdom and the European Union in June.
Comparison of Teen Account Features
| Feature | Standard Account | Teen Account |
|---|---|---|
| Direct Messages | Open to all | Only from people they follow or existing connections |
| Account Privacy | User choice | Private by default |
| Content Filtering | Standard | Enhanced harmful content hiding |
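The table's defaults can be thought of as two configuration profiles, with the teen profile overriding the standard one. The field names and values below are illustrative assumptions for the sake of the sketch; Meta has not published its settings schema.

```python
# A minimal sketch of the comparison table as a settings object: teen
# accounts automatically override the standard defaults for DM scope,
# privacy, and content filtering. Field names and values are illustrative
# assumptions, not Meta's actual configuration.
from dataclasses import dataclass

@dataclass
class AccountSettings:
    dm_scope: str = "everyone"        # standard: open to all
    private_by_default: bool = False  # standard: user's choice
    content_filter: str = "standard"

def teen_account_settings() -> AccountSettings:
    """Stricter defaults applied automatically to teen accounts."""
    return AccountSettings(
        dm_scope="followed_and_connections",  # only people they follow or know
        private_by_default=True,              # private by default
        content_filter="enhanced",            # hides potentially harmful content
    )

print(teen_account_settings().private_by_default)  # prints True
```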
Age Verification Process for Deactivated Accounts
If Meta’s AI determines that a user may be underage, the platform deactivates the account. The user must then prove their age through the company’s age verification process; completing it prevents the account from being permanently deleted. Users may need to submit identification documents or use other verification methods approved by Meta.
This verification process is critical for ensuring that legitimate users can regain access to their accounts. It also provides a safeguard against false positives from the AI system. Meta emphasizes that the AI is designed to be cautious, but the verification step ensures accuracy.
Industry Context and Expert Analysis
Child safety experts have long called for more robust age verification systems on social media platforms. The use of AI to analyze physical characteristics represents a new frontier in this effort. However, some privacy advocates raise concerns about the implications of scanning user photos and videos for physical attributes.
Meta’s approach combines multiple data sources to improve accuracy. This reduces reliance on any single method, which could be less reliable. The company states that the system is not facial recognition, addressing a key privacy concern. The AI only estimates age based on general physical cues, not individual identity.
Timeline of Meta’s Child Safety Efforts
- 2021: Meta introduces age verification for certain features.
- 2023: New Mexico lawsuit filed over child safety concerns.
- 2024: Court orders $375 million penalty and platform changes.
- 2025: Meta announces AI age detection system.
- 2026: Expansion of Teen Accounts to more countries.
Potential Impacts and Future Developments
The new AI age detection system could significantly reduce the number of underage users on Meta’s platforms. This would help the company comply with laws like the Children’s Online Privacy Protection Act (COPPA) in the United States. It also aligns with similar regulations in the European Union and other regions.
Meta plans to expand this technology to more areas of its apps over time. This includes Instagram Live, Facebook Groups, and potentially other services. The company continues to invest in AI-driven safety tools as part of its broader commitment to user protection. The success of this system could influence how other social media platforms approach age verification.
Conclusion
Meta’s use of AI to analyze height and bone structure for underage user identification marks a major advancement in online child safety. The technology combines visual analysis with contextual and behavioral data to detect accounts belonging to users under 13. This system operates in response to legal pressures and growing public demand for safer online environments for children. While the technology raises privacy questions, Meta emphasizes that it does not use facial recognition. The company’s expansion of Teen Accounts and age verification processes further strengthens its child safety framework. As regulatory scrutiny continues, Meta’s AI age detection system may set a new standard for the industry.
FAQs
Q1: How does Meta’s AI detect underage users?
The AI analyzes visual cues like height and bone structure from photos and videos. It also examines text, posts, and user interactions for age-related signals. This multi-layered approach helps identify accounts that may have misrepresented their age.
Q2: Is Meta’s age detection system facial recognition?
No, Meta explicitly states that this is not facial recognition. The AI looks at general physical themes and visual cues to estimate age. It does not identify specific individuals or match faces against a database.
Q3: What happens if Meta’s AI thinks I am underage?
Meta deactivates the account. The user must then go through the company’s age verification process. This may involve submitting identification or using other approved methods to prove their age and prevent account deletion.
Q4: Where is the AI age detection system available?
The system is currently operating in select countries. Meta plans to roll it out more broadly in the future. The company is also expanding the technology to other parts of its apps, including Instagram Live and Facebook Groups.
Q5: Why did Meta create this AI system?
The system responds to legal pressures and regulatory requirements. A New Mexico court ordered Meta to pay $375 million in penalties for misleading consumers about child safety. The company also faces numerous other lawsuits related to protecting minors on its platforms.