Jason Kelley and Adam Schwartz of the Electronic Frontier Foundation recently published an article arguing that “Age Verification Mandates Would Undermine Anonymity Online.”
We must start by thanking them for this contribution to a debate that will only grow in importance as more US states, and more countries around the world, seek to pass legislation that applies a higher level of protection to children than to adults online, as we have done for a century in real life.
2023 is the centenary of the first UK law setting the minimum age for buying alcohol at 18.
As the global trade body representing suppliers of age verification (AV) technologies, we hope an equally measured and thoughtful response is helpful in shedding more light on this issue.
First of all, the authors are absolutely right to identify the risk to privacy as the fundamental concern. The specialist AV sector was established in anticipation of a law passed in 2017 requiring British users to prove they were at least 18 before accessing pornography. From the outset, AV providers knew that users would be very concerned about their online activities being tracked, and about the risk of blackmail or of exposure through hacking. But in the wake of major breaches such as Ashley Madison in 2015, the adult sites themselves were just as worried about creating a new attack vector for hackers who could destroy their business overnight.
So from the start, privacy-by-design through data minimisation has been a founding principle of the AV industry. Our members do not create new central databases of either identities or online behaviour; neither users nor clients would accept that risk. The only unhackable database is no database at all – and that principle underpins the design of age verification solutions. Instead, once age has been established, the personal data the provider accessed for that purpose is deleted and the user is anonymised. A site seeking to know whether a user is old enough is told only “yes” or “no”, and no record is kept of which site enquired about which user. So there are no risks of “misuse, theft or subpoena” except during the moments an age is being ascertained, which is in any case designed to be a secure process. The French data protection authority, the Commission nationale de l’informatique et des libertés (CNIL), is going even further, developing a “double-blind” cryptographic mechanism that would remove any capability of an AV provider knowing which website a user is accessing (an approach we would support and aim to adopt if it works effectively), but it still recommends the use of third-party age verification providers at present:
“In order to preserve the trust between all of the stakeholders and a high level of data protection, the CNIL therefore recommends that sites subject to age verification requirements should not carry out age verification operations themselves, but should rely on third-party solutions whose validity has been independently verified.” Source
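To make the data-minimisation principle concrete, here is a minimal sketch in Python of a verifier that can answer only “yes” or “no”. The class and method names are hypothetical, invented purely for illustration; real providers implement this far more robustly, but the shape is the same: the evidence used to establish age is discarded, and only an anonymised token mapped to a boolean survives.

```python
import secrets
from datetime import date

class AgeVerifier:
    """Hypothetical, minimal sketch of a data-minimising age check.

    After verification, the verifier retains only an opaque token mapped
    to a boolean result: no identity document, no name, and no record of
    which site later asks about which token.
    """

    def __init__(self) -> None:
        self._results: dict[str, bool] = {}  # token -> "is over the threshold"

    def verify(self, date_of_birth: date, threshold: int = 18) -> str:
        today = date.today()
        age = today.year - date_of_birth.year - (
            (today.month, today.day) < (date_of_birth.month, date_of_birth.day)
        )
        token = secrets.token_urlsafe(16)  # anonymised handle for the user
        self._results[token] = age >= threshold
        # date_of_birth (and the document it came from) goes out of scope
        # here: nothing identifying is stored.
        return token

    def is_over_age(self, token: str) -> bool:
        """A relying site learns only yes or no, and nothing else."""
        return self._results.get(token, False)

verifier = AgeVerifier()
token = verifier.verify(date(2000, 5, 1))
print(verifier.is_over_age(token))  # True: over 18, and no other data exists
```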
The authors also express widely held concerns about “facial recognition.” It is worth reviewing the useful distinctions between the various types of facial technologies, as clearly outlined by the Future of Privacy Forum. They explain the clear difference between detecting a human face and analysing it – both of which are quite distinct from facial recognition, whether 1:1 or 1:many.
Facial recognition is not a feature of the age verification industry. Facial analysis for the purpose of estimating age does not use enough data to identify an individual uniquely, and in any case that data is not retained once the estimate has been made, so there can be no question of it being used for recognition. Concerns about discrimination are outdated and misplaced: the papers cited in the article are four and five years old – a lifetime in technology – and studied gender classification and face recognition, not age estimation. The US National Institute of Standards and Technology (NIST) itself concluded that:
“with adequate research and development, the following may prove effective at mitigating demographic differentials with respect to false positives: Threshold elevation, refined training, more diverse training data.” Source
The UK Information Commissioner’s Office, after reviewing facial age estimation, has clarified:
“it is, in some contexts, possible to use biometrics to make a decision about an individual or treat them differently without using that biometric data for the purpose of uniquely identifying that person. We have updated our special category guidance to recognise that there may be specific use cases where technology can be used to estimate age without uniquely identifying an individual.”
And this is exactly what has happened in the intervening five years. Estimation software is now tested for bias, the results are published, and thresholds can be set to ensure that any residual bias has no impact on users. Again, the French regulator, a champion of liberty, does not oppose this method of age assurance, saying:
“To limit the risk of video capture and possible blackmail, age verification solutions using facial analysis should be certified and deployed by a trusted third party in accordance with precise specifications.” Source
The German age regulatory bodies, the KJM and FSM, have reviewed facial age estimation over several years, engaging external experts, and have approved it as a method of controlling access even to the most sensitive adult content, with an age buffer of 3–5 years.
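To illustrate how such a buffer works in practice, here is a short sketch; the numbers and function names are ours, assumed for illustration, not taken from the KJM or FSM specifications. The estimate is trusted only when it clears the legal age by a wide margin, and borderline users are routed to a stronger check rather than refused outright.

```python
def access_decision(estimated_age: float, legal_age: int = 18,
                    buffer_years: float = 5.0) -> str:
    """Illustrative 'age buffer' logic (assumed, not a regulator's spec):
    trust facial estimation only when it clears the legal age by a wide
    margin; route borderline users to a stronger verification method."""
    if estimated_age >= legal_age + buffer_years:
        return "allow"      # clearly over age, even allowing for estimation error
    return "escalate"       # fall back to, e.g., a document-based check

print(access_decision(30.2))  # allow
print(access_decision(19.4))  # escalate, even though nominally over 18
```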
As mentioned above, privacy-preserving, third-party age verification was largely invented in Europe, where since 2018 we have had a strict privacy law, the General Data Protection Regulation (GDPR). Those of us based in the EU need to remind ourselves that this is not a global regime, and in particular does not apply in the USA.
So how can we reassure American users? Well, first there is the argument already made: it is not in the commercial interests of pornographic sites to add to their risk of being hacked, given the catastrophic consequences that would follow. But as an industry, we recognise we need to be seen to go further. So we have operated, since the publication of the first international standard for age verification in 2018 (BSI PAS 1296:2018), a certification scheme that requires accredited auditors to pre-emptively check that age verification providers not only produce accurate results, but also systematically protect user data and privacy. The IEEE and ISO are developing updated standards too. Our trade association also has a Code of Conduct, which we encourage governments to translate into law, as we expect will happen in the UK shortly.
And the inevitable evolution of the age verification market will soon bring interoperability – a concept proven in a recent EU-funded pilot, euCONSENT. This will allow users to prove their age once and re-use that check across multiple sites: you might prove your age to order wine, and find you can then seamlessly access adult content. For this to work, competing providers need to rely on each other’s checks, so members of such a network will be required to meet common standards and undergo regular audits.
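A toy sketch of how one member might pass a check to another follows. euCONSENT’s actual architecture is more sophisticated (and would use proper public-key infrastructure rather than the shared demo secrets below; all the provider names here are invented), but the essential idea is that a member site can trust a signed, minimal assertion from any audited provider without learning who the user is, and without re-running the check.

```python
import base64
import hashlib
import hmac
import json

# Demo trust list: in a real network this would hold audited providers'
# public keys, not shared secrets. Names are purely illustrative.
NETWORK_KEYS = {"provider-a": b"demo-secret-a", "provider-b": b"demo-secret-b"}

def issue_assertion(provider: str, over_age: int) -> str:
    """Provider signs 'this anonymous user is over `over_age`' and nothing more."""
    payload = json.dumps({"iss": provider, "over": over_age}).encode()
    sig = hmac.new(NETWORK_KEYS[provider], payload, hashlib.sha256).digest()
    return base64.urlsafe_b64encode(payload + sig).decode()

def accept_assertion(token: str, required_age: int) -> bool:
    """Any member site can rely on a peer provider's check."""
    raw = base64.urlsafe_b64decode(token.encode())
    payload, sig = raw[:-32], raw[-32:]          # SHA-256 digest is 32 bytes
    claim = json.loads(payload)
    key = NETWORK_KEYS.get(claim.get("iss"))
    if key is None:
        return False                             # unknown issuer: not in the network
    expected = hmac.new(key, payload, hashlib.sha256).digest()
    return hmac.compare_digest(sig, expected) and claim["over"] >= required_age

token = issue_assertion("provider-a", 18)        # checked once, e.g. buying wine
print(accept_assertion(token, 18))               # True: reusable on another site
```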
Most users will first verify their age with the most popular platforms – perhaps when opening an Instagram account at 13. So, in that example, Meta will have played a role in selecting the age verification provider, and will have done its own due diligence. As that user turns 18 or 21, they may be asked to prove their age to a higher standard so they can access legally age-restricted goods, services and content. But that check can then be the one they use for all other purposes across the internet.
While the sector shares with financial services, healthcare and other sensitive industries a vulnerability to “fly-by-night” companies and outright phishing sites, users will not be sent to such sites by Walmart or LinkedIn, or even MindGeek (which already uses our members to provide independent age checks in France). And those outliers will not be accepted into an interoperability network, marginalising them and flagging them clearly as higher risk.
Inclusivity is also important – we recently published a blog on this. Estimation methods help those without ID documents. Interoperability can help those who cannot use mainstream solutions, or who need assistance to complete a check: they need only achieve one successful check, and can then rely on it in future.
Errors will occur, but our Code of Conduct, and the example set by GDPR and emerging UK legislation, place a clear duty on data controllers to correct inaccuracies. Many AV providers use a combination of methods to check age, so if a credit report wrongly suggests a user is underage, the user can select an alternative method, or may even find the provider does so itself, in its effort to ensure its client can legally gain a new user or make a sale.
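As a sketch of that fallback behaviour (the method names and orchestration here are illustrative assumptions, not any provider’s actual pipeline), the logic amounts to trying one source after another, so that a single mistaken record is not the last word:

```python
from typing import Callable, Optional, Tuple

# Each method returns True only if it positively confirms the user is of age.
AgeMethod = Callable[[], bool]

def check_age(methods: dict[str, AgeMethod]) -> Tuple[bool, Optional[str]]:
    """Illustrative fallback chain: one negative result (e.g. a stale
    credit record) does not end the process; the next method gets a chance."""
    for name, method in methods.items():
        if method():
            return True, name       # verified, and by which method
    return False, None              # no method could confirm the user's age

verified, via = check_age({
    "credit_record": lambda: False,     # wrongly suggests underage
    "facial_estimation": lambda: True,  # an alternative method succeeds
})
print(verified, via)  # True facial_estimation
```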
In summary, technology gets smarter every day. Asking tech to allow us to prove our age online without disclosing who we are is not rocket science. We will continue to work to improve the range of solutions on offer, and to protect user data and preserve privacy through the design of age verification systems.
We welcome the scrutiny and engagement of the EFF and others as we seek to enable better protection for children while preserving the freedoms of adults online.