
The changing landscape of age assurance – George Billinge


21 May 2024 | DefAI Project blog by George Billinge

In recent months, the global conversation has heated up. The Australian government has announced significant funding for a trial of age verification, as part of a suite of measures to tackle violence against women and girls. Data protection authorities across Europe have begun to set out approaches to delivering age assurance while respecting user privacy, and a number of US states have tabled legislation mandating the use of age assurance. In April, global stakeholders with an interest in age assurance gathered in Manchester for the first ever Global Age Assurance Standards Summit, which resulted in the publication of a Communique outlining a set of principles and guidelines to support the development of rights-respecting approaches to age assurance.

Among the stakeholders gathered in Manchester were members of the DefAI consortium, a joint UK-Swiss project funded by InnovateUK and InnoSuisse. The Swiss consortium members include Privately, who provide device-based biometric age estimation technology, and IDIAP, a research institute based in Martigny. The UK-based members include ACCS, who test and certify age assurance technologies, and the Age Verification Providers Association, the international trade body for age assurance providers.

Each method of age assurance comes with trade-offs. Biometric age estimation has emerged in recent years as an increasingly popular method of age assurance: its accuracy has improved, and it avoids some of the common pitfalls of other methods. In particular, it is more accessible than approaches that require ID documents or credit cards; it introduces less friction into users’ online experience than other methods; and, when performed on-device, it can be highly privacy-preserving compared to methods that check third-party databases. Finally, while ID cards or email addresses can be forged or tampered with in order to circumvent age checks, it has historically been far more challenging for the average internet user to tamper with biometric data.

However, with the proliferation of generative AI, this may be set to change. It is becoming increasingly easy for individuals with consumer-grade devices to convincingly alter images of their faces or recordings of their voice. Research into defence mechanisms against AI-powered attacks on identity verification technologies has already started, as concerns about AI-enabled fraud have skyrocketed. But what about biometric age estimation technologies, which are designed to be far less intrusive than identity verification technologies? How are these technologies currently being undermined by generative AI, and what defence mechanisms might be able to address this?

Project DefAI is undertaking research to understand the nature of these attacks and developing technologies to detect and defend against them. Below, we set out what this looks like in practice.

Current attack vectors

The first step is reviewing existing methodologies for circumventing biometric age estimation, otherwise known as “attack vectors.” Broadly, these fall into two categories: presentation attacks, and injection attacks. Presentation attacks involve presenting a sensor (such as a camera or microphone) with fraudulent content, such as a selfie that has been modified with AI. Injection attacks are normally more technically sophisticated, involving the input of fraudulent data to a system, resulting in an incorrect output.
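The two categories can be sketched as a small taxonomy. The class names and example vectors below are purely illustrative assumptions for exposition, not part of any DefAI codebase:

```python
from dataclasses import dataclass
from enum import Enum

class AttackCategory(Enum):
    # Fraudulent content shown to a real sensor (camera, microphone)
    PRESENTATION = "presentation"
    # Fraudulent data fed directly into the system, bypassing the sensor
    INJECTION = "injection"

@dataclass
class AttackVector:
    name: str
    category: AttackCategory
    description: str

# Hypothetical examples of each category
attacks = [
    AttackVector("ai_modified_selfie", AttackCategory.PRESENTATION,
                 "An AI-altered selfie held up to the camera"),
    AttackVector("virtual_camera_feed", AttackCategory.INJECTION,
                 "Synthetic video injected via a virtual camera driver"),
]
```

The key design distinction is whether the sensor itself is ever involved: presentation attacks must fool a real capture device, while injection attacks substitute the captured data outright.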

One of the first tasks of Project DefAI is to undertake a thorough review of such attacks, including both attacks that are currently deployed at scale, and emerging attack vectors. In doing so, we begin with two important assumptions:

1) No attack is impossible to execute;

2) In the future, new attack vectors will emerge that learn from any defence mechanisms we develop.

With these assumptions in place, the next challenge is identifying the likelihood that a particular attack vector will be used to circumvent biometric age estimation technologies, both today and in the short to medium term. This enables the project team to prioritise where we focus our efforts when developing defence mechanisms.

There is no perfect metric to measure the likelihood that a particular attack vector will be used. As such, our approach is to use cost as a proxy for likelihood. If an attack can be performed on a consumer-grade device, using an app that is free or cheap to download, there is a high likelihood that it will be adopted. On the other hand, if a particular attack vector requires expensive specialist hardware to perform, it is unlikely to be adopted widely by internet users.
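The cost-as-proxy heuristic might be sketched as follows. The thresholds, vector names, and costs are illustrative assumptions, not project figures:

```python
def likelihood_from_cost(cost_usd: float) -> str:
    """Use attack cost as a rough proxy for adoption likelihood:
    cheap, consumer-grade attacks are assumed far more likely to be
    widely adopted than those requiring specialist hardware.
    Thresholds are illustrative, not project figures."""
    if cost_usd < 50:
        return "high"
    if cost_usd < 1000:
        return "medium"
    return "low"

# Hypothetical attack vectors with rough cost estimates (USD)
vectors = {
    "free_face_filter_app": 0,
    "deepfake_subscription_service": 30,
    "specialist_hardware_injection_rig": 5000,
}
priorities = {name: likelihood_from_cost(cost) for name, cost in vectors.items()}
```

Under this heuristic, defence work would be prioritised on the free or cheap consumer-grade attacks first, since those are the ones most likely to reach ordinary internet users.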

Reviewing defence mechanisms

Alongside this work, the project team are reviewing existing work on defending against AI-powered presentation and injection attacks in different contexts, such as identity verification for online banking services. Significant work has already been undertaken on these highly sensitive use cases, but the defence methodologies may not be suitable for privacy-preserving, on-device age estimation technologies. As such, once the project team have identified existing defence methodologies, they will be evaluated according to four criteria:

1) Data minimisation / privacy: Our first priority is ensuring that any defence mechanisms are compatible with privacy-preserving approaches to age estimation, whereby no personal data need leave the user’s device.

2) Accessibility: The attack vectors we are focusing our research on can be deployed on consumer-grade devices. As such, we need to ensure that any defence mechanisms we develop can be widely deployed. In addition, we need to ensure that the user experience of the biometric age estimation remains simple and easy to understand, including for individuals with different capacities.

3) Technical novelty: As our initial assumption made clear, future attack vectors will learn from our defence mechanisms. Ensuring the approach we develop is technically novel will make it more difficult for existing attack vectors to be quickly adapted to undermine our defence mechanisms.

4) Ease of benchmarking: Any novel technologies developed by Project DefAI ought to be auditable and measurable, in order to build trust and measure efficacy.

Through conducting surveys of both existing attack vectors and existing defence mechanisms, the Project DefAI Consortium aims to identify gaps in the current landscape. Could existing methodologies be combined or adapted for the use case of biometric age estimation?

Through groundbreaking research and development, Project DefAI will lead the way in ensuring that children are empowered to have safe online experiences in the age of generative AI.