AVPA responds to Ofcom Consultation

July 16, 2024

The AVPA has responded to Ofcom’s consultation on “Protecting children from harms online”.

The Association has highlighted the risks arising from:

  1. Not defining clearly what level of accuracy is required of “Highly Effective Age Assurance”; and
  2. Not requiring any attempt at age assurance to enforce the minimum ages, set in terms of service and required under the GDPR, for children to open social media accounts, despite very clear commitments to this given repeatedly by Ministers to Parliament.

Our full response is below:

1. We note the guidance has not yet defined HEAA and refer back to our suggestion made in response to the illegal harms consultation that it could be phrased as follows:

“Highly effective age assurance systems must demonstrate that their certified expected outcomes are such that more than 95% of children under 18 are prevented from accessing primary priority content, and more than 99% of children under 16 are prevented.”

We believe this is consistent with the Act:

Safety duties protecting children: interpretation

“provider is only entitled to conclude that it is not possible for children to access a service, or a part of it, if age verification or age estimation is used on the service with the result that children are not normally able to access the service or that part of it.”

In the absence of a clear definition by Ofcom, we will need to define Highly Effective Age Assurance ourselves, through industry standards and certification schemes. We will include as part of that development process the outcome of the investigation into the application of facial age estimation, where Ofcom is considering whether the reduction in the test age from 23 to 20 may have resulted in a failure to meet Ofcom’s requirements (under the interim Video Sharing Platform regime) to “effectively protect under-18s from pornographic material”. We are assuming, in doing so, that the standard being applied for VSPs is intended to be the same as will be expected for HEAA.

In the absence of numerically specific guidance as to what meets the requirement for HEAA, it is impossible for platforms to know at which level of assurance to set their solutions. This uncertainty in fact deters others from acting ahead of any definition of HEAA.

What is and is not sufficient to count as “highly effective” will eventually emerge from adjudications of complaints and investigations. But in the absence of metrics there will be a race to the bottom: less scrupulous platforms will document that preventing 70% of underage users from being exposed to primary priority harms is, in their opinion, highly effective. Ofcom may challenge that, but the courts will struggle to rule that 70% is insufficient if Ofcom cannot say what figure would be sufficient.
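Spelling the thresholds out numerically would make compliance mechanically testable. As a minimal sketch (the function name and pass-rate figures are illustrative, not a certification procedure):

```python
def meets_heaa(prevented_under_18: float, prevented_under_16: float) -> bool:
    """Test certified outcomes against the thresholds proposed above:
    more than 95% of under-18s and more than 99% of under-16s must be
    prevented from accessing primary priority content."""
    return prevented_under_18 > 0.95 and prevented_under_16 > 0.99

# A platform documenting 70% prevention plainly fails a numeric test...
print(meets_heaa(0.70, 0.80))    # False
# ...whereas without published thresholds, "highly effective" stays contestable.
print(meets_heaa(0.96, 0.995))   # True
```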

And without some attempt to assure age at 13, which we address below at Q31, there will be little or no impact when Ofcom repeats its surveys of the age at which children are treated as adults online, having first opened a social media account by lying about their age, with platforms then computing the date they turn 18 from that misleading data point.

Minimum Age under terms of service
The proposed regulations abandon any attempt to enforce the minimum age required by a platform’s terms of service or, in cooperation with the ICO, the age of digital consent. These are clear statutory requirements which Parliament expects Ofcom and the ICO to enforce, not least because the Minister promised as much.

On the 29th November 2022, the then Secretary of State announced changes to the Online Safety Bill:
“The Bill’s key objective, above everything else, is the safety of young people online. Not only will we preserve the existing protections, I will table a number of amendments that go further to strengthen the existing protections for children in the Bill to:
• make clearer the existing expectations of platforms in understanding the age of their users and, where platforms specify a minimum age for users, require them to clearly explain in their terms of service the measures they use to enforce this and if they fail to adhere to these measures, Ofcom will be able to act.”

Consequently, at the Bill’s Report Stage on 5th December 2022, the then Minister, Paul Scully, confirmed:

“The Bill’s key objective, above everything else, is the safety of children online, and we will be making a number of changes to strengthen the Bill’s existing protections for children. We will make sure that we expect platforms to use age assurance technology when identifying the age of their users, and we will also require platforms with minimum age restrictions to explain in their terms of service what measures they have in place to prevent access to those below their minimum age, and enforce those measures consistently.” (Hansard, Volume 724, Column 46)

In answer to a question by the Labour MP Mike Amesbury about the risk of children circumventing age reassurance (sic), the Minister went on to say:

“As I said, the social media platforms will have to put in place robust age assurance and age verification for material in an accredited form that is acceptable to Ofcom, which will look at that.”

We also note the clear requirement under UK GDPR Article 8 for parental consent before processing the personal data of children under 13 on the legal basis of consent. The ICO remains responsible for enforcing this, but has been reluctant to do so, in the interests of consistency, until the Online Safety Act regime comes into force. If Ofcom is not going to take a position on this, then the ICO no longer has any excuse to delay its enforcement of this important data protection measure; but that will lead to precisely the inconsistent and misaligned regulatory regimes which we have been told we need to avoid.

Ofcom rules out Highly Effective Age Assurance (HEAA) for this purpose “given we have limited independent evidence that age assurance technology can correctly distinguish between children in different age groups to a highly effective standard and, given this, there is a risk that this could have serious impact on children’s ability to access services.” We strongly dispute this conclusion.

First, this is a straw-man argument, because it is widely recognised that strict age verification for children, given the more limited data sources, would be impractical as a general requirement at 13. This is why we, and others, have long argued that a lower standard of age assurance, perhaps termed “Broadly Effective Age Assurance”, could be introduced to at least begin to reduce the number of children regularly opening social media accounts below the minimum age required by terms of service and the UK GDPR.

This could simply test whether users appear, through facial age estimation (or other methods such as email-based estimation), to be under 13. But we appreciate that Ofcom is concerned, as Melanie Dawes explained to the Today Programme on BBC Radio 4, that this could wrongly exclude too many children over 13. We believe that is easily solved, as we explain below.

But as a first step, it is perfectly feasible with today’s state-of-the-art age estimation technology to begin to prevent very young children from opening accounts, without an unacceptable degree of exclusion due to false negatives.

For example, if Ofcom accepts that facial age estimation can be, say, 99% effective with a two-year “buffer” age, it could at least require that children who appear to an estimation algorithm to be under 11 are prevented from opening accounts. This would mean under 0.5% of children who are in fact 13+ would be “false negatives” needing to find some alternative way to prove their age, but it would immediately curtail access by almost every child under 9, most 9-year-olds and the majority of 10-year-olds.
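As a minimal illustration of that arithmetic, consider the sketch below. The per-age error rates are hypothetical, chosen only to mirror the “99% effective with a two-year buffer” assumption; they are not vendor benchmarks:

```python
# Hypothetical rates P(estimated age < 11 | true age), set so that a
# 13-year-old is wrongly screened out at most 1% of the time (the two-year
# buffer); older children are misjudged by 2+ years far less often.
fn_rate = {13: 0.010, 14: 0.004, 15: 0.001, 16: 0.0003, 17: 0.0001}

# Assume roughly equal cohort sizes for 13-17-year-olds (illustrative only).
overall = sum(fn_rate.values()) / len(fn_rate)
print(f"Share of 13+ users wrongly screened out: {overall:.2%}")  # ~0.31%
```

Under these assumptions the population-weighted false negative rate comes out at roughly 0.3%, consistent with the “under 0.5%” figure above.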

But it is also feasible to tighten the control further without an unacceptable degree of exclusion of children old enough to open an account without parental consent, in line with the usual minimum age of 13.

If a child who is 13 or older is wrongly classified as underage, there are a number of highly effective age verification methods already available to the vast majority of children.

First, the vast majority of children have a current passport, according to ONS figures: 7,219,650 children aged 13 or younger hold a UK or non-UK passport, out of 9,654,163 children in that age group, implying that 74.8% of children would have a passport by the time they reach 13. The figure rises to 80.4% across all under-18s.

  • Research by the insurer Admiral found the average age at which children take their first trip abroad is 8, and that half of children have travelled abroad before their fifth birthday, so passport penetration is already very high by 13
  • ABTA reports that 58% of families with a child aged 5 or under went overseas in the past year, as did 57% of those with children aged 6-15

Perhaps more importantly, if the main concern about applying estimation at 13 is a mean absolute error of, say, up to two years, then the question is how many of the potential false negatives, children who are 13, 14 or even 15 but wrongly estimated to be below 13, have a passport with which to rectify this. The answer, based on the same ONS Official Statistics referenced above, is 95.5%, so fewer than 95,000 children would be unable to use a passport to correct an error. We would only need to find a contingency, such as vouching by a recognised professional before an alternative proof of age is issued, for those in the false negative category without a passport.
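These figures can be sanity-checked with simple arithmetic; note that the implied cohort size in the second step is our back-calculation, not a published ONS number:

```python
# Figures quoted above, from ONS Official Statistics.
with_passport = 7_219_650    # children aged 13 or under holding a passport
cohort = 9_654_163           # all children in that age group
print(f"Passport penetration by 13: {with_passport / cohort:.1%}")  # 74.8%

# For the potential false-negative group (13-15-year-olds) the response cites
# 95.5% coverage and "fewer than 95,000" without a passport; the cohort size
# implied by those two figures is our back-calculation, not an ONS statistic.
implied_cohort = 95_000 / (1 - 0.955)
print(f"Implied 13-15 cohort: ~{implied_cohort:,.0f}")  # ~2.1 million
```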

Some of those children will have a bank account, which is another option for confirming the age of a minor, so the number needing a contingency would be smaller still.

  • 2.8 million children had a bank account (2017) according to Nationwide, with 750,000 new accounts opened each year

Industry could simply be required to underwrite the cost of these “age checks of last resort” to guarantee that no child is unable to prove their age, even if they are undocumented, and we would certainly be prepared to facilitate a suitable scheme to meet that condition.

So, for the 2.5% or fewer children without existing documentation, vouching is available, such as the process already operated by issuers of Proof of Age Standards Scheme (PASS) cards (see CitizenCard, for example). This is a robust and audited approach to confirming age where documentation is not available. It would be easy for Ofcom to require that platforms make this option available at no additional cost to users, to guarantee accessibility and inclusivity.

This is all achievable today. With some determination and will, Ofcom could go further to enable age verification at 13. The systems already in place to check adult ages against authoritative databases, such as the electoral roll or credit reference agencies, can readily be applied to confirm the ages of children from relevant databases, if Ofcom takes the initiative to work with the owners of such data across government: education, health and benefits databases would all solve this problem.
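Purely as an illustrative sketch, and assuming a hypothetical record-matching interface (no government database currently exposes such an API to age assurance providers), a record-based check of a child’s age might look like this:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class RecordMatch:
    """Result of matching a claimed identity against an authoritative record
    (e.g. an education or health database); field names are hypothetical."""
    found: bool
    date_of_birth: date | None = None

def check_minimum_age(record: RecordMatch, minimum_age: int, today: date) -> bool:
    """Return True only if an authoritative record confirms the minimum age.

    Mirrors how adult checks against the electoral roll or credit reference
    data work: the relying service learns a yes/no answer, not the record.
    """
    if not record.found or record.date_of_birth is None:
        return False  # no record: fall back to another method, e.g. vouching
    dob = record.date_of_birth
    age = today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))
    return age >= minimum_age

# Example: a child born 1 June 2010, checked against a minimum age of 13.
print(check_minimum_age(RecordMatch(True, date(2010, 6, 1)), 13, date(2024, 7, 16)))  # True
```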

We note, and endorse, the comments of eight peers who were regularly engaged in the consideration of the Act in the House of Lords:

“It’s possible that Ofcom officials are concerned that age assurance for children below the age of 18 is hard to achieve with today’s technology. If that is the case, we would respectfully suggest that this concern does not align with existing industry practice where a range of age assurance methods are already being deployed to estimate the age or age range of users for safety, privacy and commercial reasons. There is also clear evidence over the last decade that the regulated companies invest time and money in child safety technology when regulators require them to do so. And when legislation is in place, such as the Age-Appropriate Design Code, tech development has followed swiftly.

The Act anticipates that age estimation strategies will be part of the regulatory standards and so your decision to require a single standard of age assurance (“highly effective”) goes against the terms of the Act and the intentions of Parliament. We are bewildered at the decision to do nothing at all to protect children under 13, and at the same time give regulated companies safe harbour…

Throughout the Act’s passage through parliament, both HMG and Ofcom repeatedly assured us that the Act gave you the powers required to protect children. At no point did Ofcom raise concerns that the powers were insufficient, indeed when parliamentarians raised concerns about ensuring that age assurance was developed to create age-appropriate services, or that terms should be mandatory – we were told that ‘the Children’s Code would do that’. So, we are confused as to why you have chosen not to.”

It would not be in keeping with either the letter or the spirit of the Online Safety Act to give up on any attempt to enforce the minimum age required by terms of service, and indeed by the GDPR and the Age-Appropriate Design Code. Ofcom can add regulations to mitigate fully the risk of exclusion by guaranteeing alternative verification options for children who lack documents or records.

We draw the attention of Ofcom to the latest work of euCONSENT ASBL in developing a tokenised, double-blind, device-based, interoperable solution for age assurance. This can be used to enable other sites and apps easily to assure the age of all users once they have completed a single, initial age check with a participating age assurance provider.

This solution will distinguish between differing levels of assurance, so it will be possible to require that only tokens created based on checks which meet the standard for Highly Effective Age Assurance can be used when that is required, while tokens from a “broadly effective age assurance” process could be accepted for lower risk use-cases.
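To illustrate how such graduated tokens might work, consider the sketch below; the structure and names are our own illustration, not the euCONSENT specification:

```python
from dataclasses import dataclass
from enum import IntEnum

class AssuranceLevel(IntEnum):
    """Ordered so that a stronger check satisfies a weaker requirement."""
    BROADLY_EFFECTIVE = 1   # e.g. facial estimation with a buffer age
    HIGHLY_EFFECTIVE = 2    # e.g. verification against an identity document

@dataclass(frozen=True)
class AgeToken:
    """Double-blind token: carries only the outcome of an age check. The
    relying service never learns the user's identity, and the age assurance
    provider never learns which service the token is presented to."""
    over_age: int           # the age threshold the user passed, e.g. 13 or 18
    level: AssuranceLevel

def accepts(token: AgeToken, required_age: int, required_level: AssuranceLevel) -> bool:
    """A relying service accepts a token only if it meets both the age
    threshold and the minimum assurance level for that use-case."""
    return token.over_age >= required_age and token.level >= required_level

# A broadly effective token suffices for a lower-risk, 13+ use-case...
t = AgeToken(over_age=13, level=AssuranceLevel.BROADLY_EFFECTIVE)
print(accepts(t, 13, AssuranceLevel.BROADLY_EFFECTIVE))  # True
# ...but not where HEAA is required, e.g. primary priority content at 18.
print(accepts(t, 18, AssuranceLevel.HIGHLY_EFFECTIVE))   # False
```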
