We set out below a short response to the main arguments put forward in a “Joint statement of security and privacy scientists and researchers on Age Assurance” signed by over 400 academics. We welcome this new level of interest in the field, and the letter raises important questions about privacy, security and centralisation. Its central weakness, however, is structural: it evaluates age assurance through the lens of worst-case, centralised and identity-heavy implementations, and proceeds on the implicit assumption that only such models are feasible. In doing so, it generalises from poorly designed or centralised deployments to the concept of age assurance itself, treating flawed examples as evidence of inevitable failure. It does not ask whether carefully designed alternatives, built on standards-based, decentralised, tokenised, multi-method and privacy-engineered foundations that directly address the risks identified, can proportionately reduce harm in practice while preserving fundamental rights.
We address their main arguments briefly in turn:
1. Scope and Privacy Architecture
The Scientists’ Claim:
Age assurance would require universal proof of age for routine online activities, exceeding offline norms and creating an infrastructure prone to surveillance, profiling and centralisation.
Response:
Age assurance does not require universal identity disclosure. Privacy-preserving architectures enable verification of an age attribute without revealing identity or enabling cross-service tracking. The scientists rightly criticise naive implementations, but double-blind and attribute-based designs directly address those concerns. Properly implemented systems minimise data flows rather than expand them. The policy debate should distinguish between flawed centralised models and privacy-engineered designs that avoid creating new identity databases entirely.
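To illustrate the attribute-based approach described above, the sketch below shows an age token that carries only an age threshold and an expiry time, with no identity fields at all. It is a minimal illustration, not a production design: real deployments would use asymmetric signatures under standards such as W3C Verifiable Credentials or ISO/IEC 18013-5 mdocs, and the key handling and function names here are hypothetical.

```python
# Illustrative sketch only: an "age attribute" token asserting a threshold
# (e.g. over 18) without any identity data. HMAC stands in for the real
# asymmetric signature an issuer would use; all names are hypothetical.
import base64
import hashlib
import hmac
import json
import secrets
import time

ISSUER_KEY = secrets.token_bytes(32)  # held only by the age-assurance provider


def issue_age_token(over_age: int, ttl_seconds: int = 3600) -> str:
    """Sign a claim containing only an age threshold and an expiry time."""
    claim = {"over": over_age, "exp": int(time.time()) + ttl_seconds}
    payload = base64.urlsafe_b64encode(
        json.dumps(claim, sort_keys=True).encode()
    ).decode()
    sig = hmac.new(ISSUER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}.{sig}"


def verify_age_token(token: str, required_age: int) -> bool:
    """Check the signature, expiry and threshold; no identity is revealed."""
    payload, _, sig = token.rpartition(".")
    expected = hmac.new(ISSUER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False  # tampered or forged token
    claim = json.loads(base64.urlsafe_b64decode(payload))
    return claim["exp"] > time.time() and claim["over"] >= required_age
```

The relying service learns only that the threshold is met; because the token contains no stable identifier, it cannot be used to link the same person across services.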
2. Circumvention and Effectiveness
The Scientists’ Claim:
Age checks are easily bypassed through VPNs, borrowed credentials, purchased accounts, deepfakes or other tools, making them ineffective.
Response:
No regulatory safeguard achieves perfect compliance. Public policy is based on harm reduction, not theoretical invulnerability.
Regular re-verification and authentication reduce credential sharing. Purchasing fake credentials requires online payment instruments not readily accessible to most children.
There is no need to ban VPNs to make age assurance effective. Technical and behavioural indicators can reveal probable location inconsistencies and anomalous usage patterns, as reflected in regulatory guidance such as the Australian eSafety Commissioner’s Reasonable Steps framework.
3. Age Estimation and Inference
The Scientists’ Claim:
Biometric estimation and behavioural inference are privacy invasive, unreliable and biased, and can be fooled by simple props or AI generated images.
Response:
Age inference generally relies on data already provided freely by the user for core platform functionality. It does not inherently introduce new surveillance vectors if properly governed. Standards-compliant age estimation incorporates liveness detection and anti-spoofing controls. It cannot easily be circumvented by fake beards or static photographs when implemented to recognised technical benchmarks.
Bias concerns are addressed through audited accuracy thresholds and by offering users multiple assurance methods. Diversity of methods mitigates discrimination risk and avoids forcing individuals into a single modality. For example, older adults typically pass facial age thresholds comfortably, so estimation reduces rather than increases exclusion even for those who lack documents or the capability to use other methods.
4. Infrastructure and Government ID
The Scientists’ Claim:
Effective age assurance would require a complex global trust infrastructure based on government issued digital IDs, which does not currently exist.
Response:
Government ID is not the only effective solution, and many people would prefer not to involve the state in this process. Tokenised and reusable age attributes can operate without exposing underlying identity, and decentralised credential ecosystems already function in other regulated sectors (e.g. ISO/IEC 18013-5 mobile driving licences). Age assurance does not require a monolithic, internet-wide identity infrastructure: sector-specific or jurisdictional systems can operate independently while remaining interoperable. The infrastructure challenge is real, but it is engineering complexity, not conceptual impossibility.
5. Migration to Fringe Sites
The Scientists’ Claim:
Users excluded from mainstream platforms will migrate to unregulated fringe sites, increasing exposure to scams or malware.
Response:
Migration risk argues for consistent enforcement, not policy abandonment. The majority of minors use mainstream platforms and find alternatives disappointing (e.g. Australia saw only a temporary rise in interest in unregulated social media options). Safeguarding those environments yields the greatest protective impact. Just because some bars do not check age carefully, society does not abandon minimum drinking ages; it seeks better enforcement.
Crucially, the more important use case is preventing adults from entering child-specific spaces to groom or exploit. Even partial reductions in adult infiltration of mainstream sites materially reduce the risk of abuse.
6. False Sense of Security
The Scientists’ Claim:
Circumvention will allow predators into child designated spaces, creating misplaced trust.
Response:
Age assurance should never be presented as a standalone solution. It is one layer within a broader safeguarding architecture that includes moderation, behavioural detection, parental controls and reporting systems. A layered model avoids over-reliance on any single control while still raising barriers against predatory behaviour. It is the aggregate impact that matters.
7. Privacy, VPNs and Security Tools
The Scientists’ Claim:
Age assurance pressures policymakers to restrict VPNs and increases data exposure, thereby diminishing security and privacy.
Response:
Age assurance does not require banning VPNs. Detection of technical and behavioural indicators of location and age can address evasion without prohibiting legitimate privacy technologies.
Where privacy-enhancing technologies are used, they do not inherently require dependence on major phone manufacturers. Browser-based wallets, passkey infrastructure and progressive web applications provide broadly accessible implementation paths.
8. Inequality and Digital Exclusion
The Scientists’ Claim:
Requirements for digital credentials or smartphones will exclude elderly users, undocumented migrants or those without compatible devices.
Response:
Exclusion risk arises from single-channel design, not from age assurance itself. Systems can offer multiple methods, including document checks; verification based on payment cards, email addresses or mobile numbers; reusable credentials; and facial age estimation that does not require smartphone ownership. Professional attestation can serve as a backstop.
Inclusion must be a design principle. With method diversity and proportionate thresholds, age assurance can avoid the structural discrimination the letter anticipates.
9. Centralisation and the “End-to-End” Principle
The Scientists’ Claim:
Age assurance centralises control and contradicts the end-to-end principle of internet design.
Response:
The end-to-end principle is an engineering heuristic, not a constitutional doctrine. The internet already incorporates regulatory controls in areas such as financial compliance, fraud prevention and detection of child sexual abuse material. Age assurance is not a novel architectural rupture but an extension of established societal governance norms.
10. Evidence of Harm and the Moratorium Proposal
The Scientists’ Claim:
There is insufficient scientific evidence that restricting access improves wellbeing, and deployment should pause pending consensus.
Response:
It is difficult to argue there is “no scientific evidence” of harm when extensive literature documents links between early exposure to pornography, grooming environments, algorithmically amplified self harm content and adverse developmental outcomes. The absence of perfect causal proof does not justify inaction where credible harm and feasible mitigation coexist.
Real-world deployments in the UK and Australia are already generating operational data. Iterative implementation with evaluation is more proportionate than indefinite delay. Waiting for unanimity while exposure continues carries its own risks, and the precautionary principle should be considered carefully for the highest-risk use cases.
Concluding Observation
The open letter correctly identifies real implementation risks. However, many criticisms assume either flawed centralised models or an unrealistic standard of perfect effectiveness. Privacy-preserving, decentralised, standards-based age assurance can materially reduce harm when implemented as one component of a layered child protection framework. The age verification sector has been acutely conscious of these concerns for years, and it continuously evolves to mitigate the risks.
The choice is not between total surveillance and total inaction. It is between imperfect but ever-improving safeguards, and continued unmitigated exposure of minors to well-documented online risks.