Fraser Nelson, Editor of The Spectator, has quite reasonably raised a list of questions he still has about the Online Safety Bill’s impact on free speech in this article: “The Online Safety Bill is still a censor’s charter”. As many of these refer to age assurance, we have sought to respond below:
How are tech companies supposed to distinguish between adults and kids?
Online services will be expected to apply a proportionate degree of age assurance. Where the risks are lower, the latest age estimation techniques will be sufficient: these use artificial intelligence software to estimate the user’s age based on, for example, how they look or speak, and will protect the majority of underage users, though they may allow some a year or two below the minimum age to retain access. For higher-risk content and functionality, stricter, more traditional age verification measures (those based on ID documents or the records of banks, mobile networks or credit reference agencies) will be needed to confirm an actual date of birth. It is worth noting that once a user is a few years over the legal age, estimation methods are again effective: a 3-5 year buffer above the legal limit (checking for 21, say, to enforce a limit of 18) allows almost completely effective enforcement.
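As a rough sketch, the buffered-check logic described above might look like this (a minimal illustration with assumed threshold values, not any provider’s actual implementation):

```python
def age_gate(estimated_age: float, legal_age: int = 18, buffer: float = 3.0) -> str:
    """Buffered age-estimation check.

    Only users whose estimated age clears the legal limit by the full
    buffer are passed on the estimate alone; anyone closer to the limit
    is routed to a stricter verification method (e.g. an ID check)
    rather than being refused outright.
    """
    if estimated_age >= legal_age + buffer:
        return "pass"    # clearly over the limit: estimation alone suffices
    return "verify"      # near or under the limit: confirm date of birth

# A user estimated at 24 clears an 18+ gate with a 3-year buffer,
# while a user estimated at 20 is asked to verify instead.
print(age_gate(24.0))  # pass
print(age_gate(20.0))  # verify
```

The point of the buffer is that estimation errors of a year or two only matter for users near the threshold, and those users fall through to verification rather than being blocked.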
If they are expected to verify, does this mean the end of anonymous accounts?
No. The essence of online age assurance is being able to prove your age without disclosing who you are, and technology can facilitate this in a number of ways. Age estimation through facial image or voiceprint analysis reviews patterns in these features – and those patterns are not sufficient to recognise or uniquely identify any individual. The image is instantly deleted, and the processing can even take place on your own device, with no data ever leaving your phone or PC.

If the more specific age verification techniques are used, then these should be completed by independent, regulated third parties whose systems have been built on privacy-by-design and data minimisation principles, with the ICO ensuring that these services do not abuse personal data. None of them creates a central database of personal information attractive to hackers. They either destroy any personal data once the age is confirmed, or, if a digital identity is chosen, encrypt the data with a key that only the user possesses, so that only that user can agree to share just an age confirmation such as ‘over 18’.

Of course, some trust and regulation is required – but no more than we already rely on to keep our banking or health data secure. And no one is demanding that you hand Elon Musk your passport before you can set up a Twitter account.
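To illustrate the data-minimisation principle, here is a hypothetical sketch of an attribute check that returns only the ‘over 18’ yes/no answer, so the relying platform never sees the date of birth itself (the function name and structure are ours, for illustration only):

```python
from datetime import date

def over_18_attribute(date_of_birth: date, today: date) -> bool:
    """Derive only the attribute the platform needs ('over 18': yes/no).

    The date of birth stays with the trusted verifier; the relying
    service receives nothing but this boolean.
    """
    # Subtract one year if this year's birthday has not yet passed.
    age = today.year - date_of_birth.year - (
        (today.month, today.day) < (date_of_birth.month, date_of_birth.day)
    )
    return age >= 18

print(over_18_attribute(date(2000, 6, 1), date(2023, 1, 1)))  # True
print(over_18_attribute(date(2010, 6, 1), date(2023, 1, 1)))  # False
```

In a real deployment the check runs inside the regulated verifier’s system (or on the user’s own device), and only the boolean crosses the boundary.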
If they are expected to guess, how likely are they to guess right? And distinguish between an 18-year-old and a 19-year-old?
As explained above, age assurance is applied on a proportionate basis. If a 19-year-old finds they fail an estimation test for 18, they will always have the option of an alternative verification method based on their actual date of birth.
What risk is there of adults being subjected to a censorship regime intended for kids? If the government’s case rests on the power of age-guessing algorithms, where is the evidence of their accuracy?
The leading algorithms deliver accuracy to within a mean absolute error of 1½ years; a human estimates to within 6-8 years. So, when checking if someone is 18, a small percentage of 16½-year-olds may scrape through and, conversely, a handful of 19½-year-olds may fail. But the majority of under-18s will fail, and the majority of over-18s will pass.
The Age Check Certification Scheme, accredited by the UK Accreditation Service (UKAS), audits and certifies age estimation systems, so there is clear, independent evidence of their accuracy (see their registry of tested solutions here).
The current accuracy rates (Mean Absolute Errors) are:
- 2.96 years for 6-70 year-olds.
- 1.52 years for 13-19 year-olds.
- 1.56 years for 6-12 year-olds.
The True Positive Rate (TPR) for 13-17-year-olds correctly estimated as under 23 – i.e. applying a 5-year “buffer” to check users are 18+ – is 99.65%. (For false positive and false negative rates, see pages 32-37 of their paper.)
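For readers who want to see how these two headline figures are computed, here is a small worked example. The (true age, estimated age) pairs are invented for illustration and do not reproduce the ACCS data:

```python
# Invented (true_age, estimated_age) pairs -- illustrative only.
samples = [(15, 16.2), (16, 14.9), (17, 18.1), (17, 21.5), (14, 13.0)]

# Mean Absolute Error: the average size of the estimation error.
mae = sum(abs(true - est) for true, est in samples) / len(samples)

# TPR for an under-18 check with a 5-year buffer: the share of actual
# under-18s whose estimate falls below 18 + 5 = 23, and who are
# therefore correctly caught by the "estimated under 23" test.
minors = [(t, e) for t, e in samples if t < 18]
tpr = sum(1 for _, e in minors if e < 23) / len(minors)

print(round(mae, 2))  # 1.78
print(tpr)            # 1.0
```

Note how the one badly mis-estimated 17-year-old (estimated at 21.5) inflates the MAE but is still caught by the buffered check, which is the whole rationale for the buffer.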
Michelle Donelan, the Culture Secretary, has just been on the radio saying fines will be crushing (‘we’re talking tens of billions of pounds’) for Big Tech firms who violate the censorship clause. Given that such firms make virtually no money from current affairs and political discussion (they’d drop it all if they could as it represents a massive regulatory risk and negligible cash), won’t they just let the censorship bots rip on all content – rather than risk ‘billions’ on the accuracy of age-targeting algorithms?
Regulation will be proportionate. The ICO recently published research into measuring the accuracy of age assurance, demonstrating that they are attuned to the way this technology works and understand the margins for error. They can refer to international standards from the BSI, IEEE and ISO when specifying their requirements, so Big Tech will not be at risk of massive fines if they comply with the guidance the regulators will publish and make use of the available technology.
If online firms can’t know user ages for sure, and are on the hook for billions if they mess up, surely they will err on the side of content removal?
Both the ICO and OFCOM, who recently published a statement confirming they are working in lockstep, are clear that they will be applying this legislation in a proportionate manner, taking into account the capabilities of the latest technological solutions. So, if platforms follow the guidance, regulators will be obliged to acknowledge their efforts, and if the authorities did not do this, then the courts would certainly overturn any attempt to impose fines.
We hope this is reassuring for Mr Nelson and his readers, but are always pleased to respond to further questions, or debate this topic in more detail.