The UK Government is considering it, and the Conservative Party has already announced that it intends to follow Australia’s lead by setting a minimum age of 16 for social media. Critics of the policy claim that it is not working in Australia, but the evidence does not support this premature conclusion.
We are preparing a formal lessons-learned paper drawing on our members’ experience and expert judgement since the Australian rules came into force on 10 December 2025. In the meantime, several early points are worth highlighting:
First, claims that age limits are “too easy to circumvent” largely reflect platform implementation choices, not technical limits. Social media platforms have opposed age-based access rules and are applying age checks reluctantly. In some cases, they have failed to deploy well-established safeguards such as liveness detection, which prevents the use of photos or avatars, or buffer ages, where age-estimation tools test several years above the legal threshold so that under-16s misestimated as slightly older are still caught rather than waved through. Where those protections are weakened or disabled, circumvention becomes easier by design.
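To make the buffer-age point concrete, here is a minimal sketch of how such a check works. It is illustrative only: the function, the estimator input, and the specific numbers are assumptions for explanation, not any platform’s actual implementation.

```python
# Minimal sketch of a "buffer age" check (illustrative assumptions only).
# estimate_age stands in for whatever facial age-estimation service a
# platform might use; the thresholds are example values.

LEGAL_MINIMUM = 16   # the statutory threshold
BUFFER_YEARS = 2     # test several years above it to absorb estimation error

def passes_age_gate(estimated_age: float) -> bool:
    """Pass only users whose estimated age clears the buffered threshold.

    Estimators are typically accurate to within a couple of years, so a
    15-year-old may be estimated at 16 or 17. Testing at 18 (16 + 2)
    means borderline under-16s fail the automated check and are routed
    to a stronger verification method instead of being let through.
    """
    return estimated_age >= LEGAL_MINIMUM + BUFFER_YEARS

# Example: an under-16 whose estimate lands just above the legal minimum
# is still caught by the buffer and escalated, not granted access.
for estimate in (14.0, 17.0, 19.5):
    outcome = "pass" if passes_age_gate(estimate) else "escalate to stronger check"
    print(f"estimated age {estimate}: {outcome}")
```

The design choice is simply to trade a few extra verification steps for genuinely 16–18-year-old users against a much lower chance of under-16s slipping through on an estimation error.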
Second, assertions that the policy is ineffective because of VPN use need to be grounded in data. UK evidence following the introduction of mandatory Highly Effective Age Assurance shows that VPN usage did increase, rising from around 650,000 daily users to a peak of just over 1.4 million in mid-August 2025. However, usage then declined steadily to around 900,000 by November, well below the peak and far from universal adoption. This pattern mirrors what we would expect in any regulatory transition: short-term experimentation followed by normalisation, rather than sustained mass evasion.
Crucially, the legal and regulatory test is not whether circumvention is theoretically possible, but whether children can normally access age-restricted services. No online safety measure is absolute, but absolute effectiveness has never been the standard applied to other forms of child protection, such as age limits on buying alcohol.
Third, claims that Australia’s approach is ineffective overlook the regulator’s deliberately phased rollout. The eSafety Commissioner has prioritised new accounts and obvious cases first, while allowing platforms to identify and challenge likely under-16 users over time using account signals and behaviour. Given that under-13 users have historically inflated their ages at sign-up, this was always going to be a multi-stage process. The framework explicitly provides for a review after six months, at which point empirical evidence should inform whether requirements on existing accounts need to tighten. Ofcom will be able to learn from those lessons.
The Australian regime is still bedding in. The right question is not whether perfection was achieved on day one, but whether the regulatory framework supports iteration, evidence-based adjustment, and meaningful protection for children.
(This post first appeared on LinkedIn, where it received the stamp of approval from the eSafety Commissioner in the comments.)
