AI needs more than Age Inference to protect kids

April 22, 2026

Every major AI assistant available to the public today primarily relies on inference and self-attestation to assess whether a user might be a minor (and this is also the initial strategy being adopted by most social media platforms in Australia). Both approaches are systematically vulnerable to low-effort circumvention. It is time to say so plainly.

Conversational AI systems have arrived in hundreds of millions of hands with remarkable speed. Children are among the most enthusiastic users. And yet the mechanisms these platforms use to spot and protect minors are, at best, heuristic guesswork and, at worst, a single unenforced checkbox. In the age assurance industry, we spend considerable time debating the merits of age verification versus age estimation, the appropriate buffer age for false positive management, and how to effectively mitigate privacy concerns around certain technical architectures. These are important conversations. But they presuppose that the platforms in question are, in fact, trying to verify or estimate age with meaningful rigour. Most AI platforms are not making sufficient use of highly effective age assurance. And the gap between what is claimed and what is actually delivered deserves direct scrutiny.

How AI Systems Currently Handle Age

The approaches in use across publicly available large language model (LLM) products combine two underlying techniques, inference and self-attestation, deployed in three broad patterns:

Conversational inference
The model analyses the user’s language, vocabulary, topics, and interaction style in real time and forms a probabilistic estimate of whether they might be a minor. Protective measures, such as declining to discuss certain content, are applied when the model’s confidence that the user is a child crosses an internal threshold. As with any machine learning system, these models will improve over time, as facial age estimation has clearly demonstrated. However, average error rates are not published, there is no public disclosure of any buffer ages applied to reduce false positives, and the underlying data (user language, topics and interaction patterns) has inherent limitations for reliable age inference.

Cautious defaults for new users
At least one widely used platform applies conservative content restrictions to new accounts until sufficient conversational evidence has accumulated for the model to infer the user is likely an adult; once that inference is made, restrictions are relaxed. Most platforms work the other way round, applying restrictions only after they suspect the user is a child.

Self-attestation at sign-up
A date-of-birth field or age confirmation checkbox at account creation. The platform records the response. No verification occurs.

Each of these approaches has a role to play in a layered safety architecture. None of them, individually or combined, constitutes age assurance. And each is vulnerable to exploitation in ways that are neither obscure nor technically sophisticated, as the sketch below illustrates.
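
To make the weakness concrete, here is a minimal sketch, in Python, of how these three patterns interact. The marker lists, the threshold and the Account class are our illustrative assumptions, not any vendor’s implementation; production systems use learned models, but they consume the same user-controlled signals.

    # Illustrative sketch only: the marker lists, the 0.3 threshold and the
    # Account class are invented for this article.

    CHILD_MARKERS = {"homework", "cartoon", "my teacher", "playground"}
    ADULT_MARKERS = {"mortgage", "lease", "retirement", "invoice"}

    def infer_minor_probability(messages: list[str]) -> float:
        """Crude conversational inference: score surface vocabulary.

        Real systems use learned models, but the inputs are the same:
        the user's own language, topics and interaction style.
        """
        text = " ".join(messages).lower()
        child_hits = sum(marker in text for marker in CHILD_MARKERS)
        adult_hits = sum(marker in text for marker in ADULT_MARKERS)
        total = child_hits + adult_hits
        return 0.5 if total == 0 else child_hits / total

    class Account:
        def __init__(self, attested_age: int | None = None):
            self.attested_age = attested_age  # recorded at sign-up, never verified
            self.restricted = True            # cautious default for a new account

        def update_restrictions(self, messages: list[str]) -> None:
            # Restrictions lift when the inferred minor-probability drops below
            # an (unpublished, here invented) threshold, or when the user simply
            # attested an adult age at sign-up.
            if (self.attested_age or 0) >= 18 or infer_minor_probability(messages) < 0.3:
                self.restricted = False

Both release valves, the attested age and the vocabulary the scorer reads, are entirely under the user’s control. That is the root of the two vulnerabilities examined next.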

Vulnerability One: Gaming the Inference Model

Conversational inference works by pattern-matching. A child who writes in simple sentences, asks about cartoons or mentions their school will likely trigger protective responses. A child who does not behave like the model’s stereotype of a child will not. When asked at a recent conference how accurate their inference solution is, a leading AI provider could only respond “in line with industry standards” and declined to share what those standards were. The sequence required to defeat inference-based age detection can be described in a single social media post and easily replicated at scale.

This is not a theoretical concern. A user who opens a conversation with questions about mortgage refinancing, lease agreements or retirement savings will generate an immediate inference of adulthood – regardless of their actual age. By the time the conversation shifts to topics that would be harmful to a minor, the model has already categorised the user as an adult and relaxed its guard accordingly.

For platforms that apply cautious defaults to new accounts, the same logic applies: the inference threshold must be met before restrictions lift, but the pathway to meeting it remains entirely within the user’s control.
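
Using the illustrative scorer sketched above, the priming attack takes four lines:

    # Topic priming against the illustrative scorer sketched above.
    account = Account()                           # new user, cautious default applied
    opening = ["Can you explain mortgage refinancing versus a lease agreement?",
               "How much should I put into retirement savings each month?"]
    account.update_restrictions(opening)
    print(account.restricted)                     # False: inferred adult, guard relaxed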

Vulnerability Two: Self-Attestation of an Inflated Age

Self-attestation is the weaker of the two techniques and needs less analysis. When a user is asked their age and types a number, the platform records the number. No mainstream AI product currently available has any mechanism to check whether that number is true.

A child who types 25 is treated as a 25-year-old. A child who is initially subject to cautious defaults and then simply states “I am 18” in conversation will, in most systems, trigger a reclassification. The model accepts the claim because it has no alternative.
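
In code terms, the reclassification amounts to nothing more than this hypothetical handler (again building on the sketch above); there is no verification branch to take:

    def handle_age_claim(account: Account, message: str) -> None:
        # Hypothetical handler: the claim itself is the only available evidence.
        if "i am 18" in message.lower() or "i'm 18" in message.lower():
            account.attested_age = 18   # the claim is recorded as fact
            account.restricted = False  # cautious defaults are lifted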

This is not a criticism of the engineers who built these systems. It reflects a genuine tension: any synchronous identity check introduces friction. But acknowledging the trade-offs does not make the vulnerability disappear. It simply means the designers have made a choice, consciously or otherwise, to prioritise frictionless access over age assurance.

Why Conversational AI Is a Distinct Risk Category

The risks associated with age-inappropriate AI interactions are qualitatively different from those associated with, say, an age-gated website hosting adult content, or even user-generated content uploaded to social media. A static page can be categorised, flagged and filtered. A live conversation is a moving target; the risk doesn’t just exist at the start of a session, but can emerge through the ‘drift’ of a dialogue.

A child interacting with a capable AI assistant can, within a single session, be exposed to radicalising ideologies, detailed harmful content and sophisticated persuasion — not because they sought it out, but because a conversation drifted, was steered, or escalated in ways that no content classification system can anticipate in advance. The AI’s apparent authority and apparent understanding make it a uniquely effective vector for harmful influence on younger users.

The platforms themselves have acknowledged some version of this risk. Their response, to date, has been inference and self-attestation. That response is not adequate.

Highly Effective Age Assurance at the Point of Escalation

We are not calling for every AI interaction to be gated behind a passport check. That would be disproportionate and commercially impractical. What we are calling for is a risk-proportionate response that matches the actual threat model.

Rather than the uncalibrated age inference currently performed by LLMs, the appropriate mechanism is one of three certified alternatives: age estimation (e.g. biometric analysis with a buffer age applied to minimise false positives at the margin), age verification (using validated, bound attributes), or more reliable methods of age inference (e.g. drawing on the extensive metadata associated with an email address or mobile/cell number). Each should be delivered through certified, trusted solutions proven to work, and triggered at the point at which a conversation develops towards a high-risk topic area. Critically, this can be delivered in a way that is genuinely privacy-preserving:

Local on-device processing
Age estimation via facial analysis, or identity document verification, can already be performed entirely on the user’s device, so no biometric data or document image ever leaves it. Even where processing happens in the cloud, personal data is encrypted in transit and deleted by the supplier as soon as age is established. Through the latest interoperable ecosystems, users can be offered a choice of supplier, selecting the one they trust, temporarily, with their data.

Anonymous signals
The device produces only a single output: a certified 18+ (or any other required age, depending on context) confirmation. The AI platform receives this signal and nothing else.

Double-blind architecture
Neither the AI platform nor the age assurance provider can link the verification event to the user’s identity. The signal is anonymous by design.

Trusted certified providers
The verification is issued by a provider certified under a recognised framework and regularly, independently audited (against ISO 27566-1 and IEEE 2089.1); if delivered as an app, it is usually also subject to review by the store from which it is downloaded, providing ongoing assurance. Auditability does not require identity retention, so this also effectively removes any demand to retain personal data for audit or legal review.

This architecture addresses the privacy objection while delivering assurance that cannot be defeated by typing a false date of birth or asking clever opening questions.
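
As a minimal sketch of how these pieces might fit together, the following Python (using the third-party cryptography library for Ed25519 signatures) shows an escalation-triggered check that hands the platform a signed, anonymous over-18 claim and nothing else. The topic list, token format and freshness window are our assumptions for illustration; certified providers define their own schemes.

    import json
    import time

    # Third-party dependency: pip install cryptography
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    HIGH_RISK_TOPICS = {"self-harm", "weapons", "sexual content"}  # illustrative

    def entering_high_risk(turn_topics: set[str]) -> bool:
        # Re-evaluated on every turn: drift mid-conversation, not just the
        # opening message, can trigger the gate.
        return bool(turn_topics & HIGH_RISK_TOPICS)

    # --- Age assurance provider side (e.g. an on-device module) ------------
    provider_key = Ed25519PrivateKey.generate()

    def issue_over18_token() -> bytes:
        # The token carries a single anonymous claim. Estimation or document
        # checking happens locally; no image or identity is embedded here.
        payload = json.dumps({"over18": True, "iat": int(time.time())}).encode()
        return payload + provider_key.sign(payload)  # Ed25519 sig = 64 bytes

    # --- AI platform side ---------------------------------------------------
    provider_pub = provider_key.public_key()  # in practice: from a certification registry

    def platform_accepts(token: bytes, max_age_s: int = 300) -> bool:
        payload, sig = token[:-64], token[-64:]   # fixed-length signature split
        try:
            provider_pub.verify(sig, payload)     # raises if forged or tampered
        except InvalidSignature:
            return False
        claim = json.loads(payload)
        # Freshness window limits replay; no user identifier is ever present.
        return claim.get("over18", False) and time.time() - claim["iat"] <= max_age_s

    # In the chat loop: the gate fires only when the dialogue drifts into risk.
    if entering_high_risk({"weapons"}):
        token = issue_over18_token()     # in practice: the user's chosen provider
        assert platform_accepts(token)   # the platform learns only "over 18"

Because the payload contains no identifier, neither side can link the event to a person; one-time nonces, omitted here for brevity, would tighten the replay protection further.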

“Until these vulnerabilities are fixed, age assurance, certified as highly effective, should be required — not as a counsel of perfection, but as the minimum standard a responsible platform should be able to meet.”

Conclusion

Age inference, as currently deployed by AI platforms, is not age assurance. Self-attestation is not age assurance. A model that accepts “I am 18” as a verified fact is not protecting children. Worse, it creates the appearance of protection while leaving the vulnerability intact.

Until the inference and self-attestation vulnerabilities described in this article are fixed by some means other than better guessing, highly effective age assurance, delivered through trusted, certified solutions, should be the minimum standard before publicly accessible AI platforms permit topics that put children at risk.

This is not a counsel of perfection. It is the minimum standard a responsible platform should be able to meet, using technology that exists today, in a manner that can fully respect user privacy. The industry has solved harder problems. This one is waiting only for the will to address it.

The Age Verification Providers Association represents the leading providers of age verification and age estimation technology. We work with platforms, regulators and policymakers to develop proportionate, privacy-preserving frameworks for protecting children online.