As the debate about age verification intensifies, with several hearings in US federal courts challenging the constitutionality of state laws requiring age assurance for adult content (e.g., Louisiana, Utah, Texas), for social media use (Arkansas), and for age-appropriate design (California), a frequent argument is that parental controls are a more effective alternative.
First, it must be noted that parental controls aim to deliver a very different policy objective from online age verification. Controls give parents the power to impose restrictions if they choose to do so; age assurance allows the state to protect children from accessing spaces on the Internet that lawmakers have decided are not suitable for minors, just as it already bars them from bars, strip clubs and casinos in real life.
But if legislators decide to leave parents with discretion over where their children go online, what is the likely outcome?
While we have not found equivalent research in the USA, the UK media regulator, Ofcom, publishes an annual report, Children and Parents: Media Use and Attitudes.
The 2023 Ofcom report shows that:
- Seven in ten parents of children aged 3-17 said they had some form of technical control in place to manage their child’s access to content online, but
- only 13% of parents said they used security apps that can be installed on their child’s device to monitor the apps they use and for how long.
- The most-used technical controls were those built into the device by the manufacturer (34%).
The 2022 version of the report also found:
- Parents had high awareness of safety-promoting technical tools and controls (91%), but only seven in ten had used any of them (70%).
- The tools most likely to be used were parental controls built into a device’s software (31%).
- Just over a quarter of parents used content filters provided by their broadband supplier, where the filters apply to all devices using that service (27%).
- A much larger proportion (61%) said they were aware of this feature, showing that not all parents are adopting this potentially useful control.
- Among those who were aware of, but did not use, this technology, the most common reasons were:
- that they trusted their child to be sensible or responsible when online (45%), or
- that they preferred to supervise their child’s online use by talking to them and setting rules (44%).
- Up to a fifth questioned their usefulness:
- 18% said that filters block too much or get in the way.
- 11% said they don’t block enough.
- 17% said they were too complicated to use.
- 7% said that their child could find a way to get around the filters.
- A quarter chose to use more localised types of controls:
- changing the settings on the child’s device to stop apps being downloaded or in-app purchases being made (26%)
- using ‘safe modes’ within platforms to restrict access to inappropriate online content, such as Google SafeSearch or the TikTok Family Safety Mode (24%).
There is also a report by Yonder Consulting for Ofcom, which found that:
- Six in ten (60%) children aged 8 to 12 who use large social media platforms are signed up with their own profile.
- Up to half had set up at least one of their profiles themselves, while up to two-thirds had help from a parent or guardian.
- Half (47%) of children aged 8 to 15 with a social media profile have a user age of 16+.
- 32% of children aged 8 to 17 have a user age of 18+.
- Among the younger 8 to 12 age group, the study estimated that two in five (39%) have a user age of 16+, while just under a quarter (23%) have a user age of 18+.
Overall, this UK research paints a picture of high awareness but patchy adoption amongst parents of the parental controls that are available, and, when they are used, parents tend to adopt the lightest-touch options.