Alternatives to AV – Filtering Software

January 17, 2024

Since the case of Ashcroft v. American Civil Liberties Union, 542 U.S. 656 (2004) was decided by the US Supreme Court, with a 5-4 majority agreeing that the Child Online Protection Act (1998) was unlikely to survive the strict scrutiny applied to content-based restrictions on protected speech, filtering software has been cited as a less restrictive means of protecting children from otherwise constitutionally protected speech.

Justice Anthony M. Kennedy, who wrote the majority opinion, stated that “filters are less restrictive than COPA [the Child Online Protection Act, 1998]. They impose selective restrictions on speech at the receiving end, not universal restrictions at the source…  The Government failed to introduce specific evidence proving that existing filtering technologies are less effective than the restrictions in COPA.”

But in the 15 years since the Supreme Court declined to hear a further appeal in support of COPA, filtering has clearly failed to deliver.  Children today are exposed to far more pornography online than they were at the start of the century.  There is more of it; there are more platforms and websites hosting it; and there are more devices with which to access it, in the hands of more children at a younger age.

For filtering to have been as successful in achieving the policy objective of preventing children from seeing obscene material online as the justices hoped, a theory-of-change approach would conclude that at least three things needed to be true:

  1. Parents must know about filtering
  2. Parents must know how to operate it, and
  3. Parents must choose to use it.

While there is insufficient data to understand the extent to which this logic breaks down at each of those three stages, we do know the net result with some certainty.  The UK’s Internet regulator, Ofcom, conducted research during 2022 which found that while 57% of parents were aware of the content filters offered by their Internet Service Provider (ISP), only 27% made use of them.

While it is true that almost 70% of parents use some form of protective technology, the bar has to be lowered to include minimal-impact measures such as turning on “safe search” for search engines, or the partial protection offered by parental controls applied to hardware or software.  But that still leaves nearly one third of all children entirely unprotected from harm, and two-thirds not benefiting from content filtering.

Technical controls: awareness and usage among parents

Parental control software set up on a particular device used to go online (e.g. Net Nanny, McAfee Family Protection, OpenDNS FamilyShield) – awareness 62%, usage 27%
Parental controls built into the device by the manufacturer, e.g. Windows, Apple, Xbox, PlayStation etc. – awareness 59%, usage 34%
Content filters provided by your broadband internet service provider (e.g. BT, TalkTalk, Sky and Virgin Media), where the filters apply to ALL the devices using your home broadband service (also known as home network filtering) – awareness 57%, usage 28%
Restricting access to inappropriate online content through things like Google SafeSearch, YouTube Restricted mode or TikTok Family Safety Mode – awareness 51%, usage 27%
Parental control software, settings or apps that can be used on your child’s phone or tablet to restrict access to content or manage their use of the device – awareness 47%, usage 24%
Changing the settings on your child’s phone or tablet to stop apps being downloaded or stop in-app purchases – awareness 46%, usage 27%
Apps that can be installed on a child’s phone to monitor which apps they use and for how long – awareness 36%, usage 13%
None or don’t know – awareness 8%, usage 30%

Seven in ten parents of children aged 3-17 said they had some form of technical control in place to manage their child’s access to content online. Overall, the most-used technical controls were those that are built into the device by the manufacturer (34%). Parents were far less likely to use controls that required them to download specific software or apps. For example, only 13% of parents said they used security apps that can be installed on their child’s device to monitor the apps they use and for how long.

Ofcom: Children and Parents: Media Use and Attitudes, published 29 March 2023

But let us assume that there is a huge public awareness and education campaign.  Perhaps filtering software is discounted, or even offered to parents for free, and is universally adopted.  Would even this achieve the policy objective of preventing children from being exposed to adult content?

The answer is clearly not.  While we believe it has a role to play, filtering software remains ineffective as a single policy response, for several reasons:

Overblocking and Underblocking:

Filtering software often faces challenges in accurately categorizing and blocking content.  It can rely on blacklists of harmful sites and whitelists of harmless sites, but these need to be maintained somehow.  If they are populated through self-reporting, that then needs to be supervised.  It can also rely on AI-based content moderation, but this is inevitably imprecise.  So, using either approach, filters may both overblock, restricting access to legitimate and educational content, and underblock, allowing inappropriate content to slip through.
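
As a rough illustration of why list-based blocking struggles, the minimal sketch below (in Python, using made-up domain names rather than any real product’s lists) shows how whichever default a filter chooses for sites that appear on neither list produces one of the two failure modes:

```python
# A minimal sketch of list-based filtering, using made-up domain names.
# Whichever default is chosen for unlisted sites, the filter either
# underblocks or overblocks.

BLOCKLIST = {"adult-videos.example"}    # known adult sites (hypothetical)
ALLOWLIST = {"encyclopedia.example"}    # known harmless sites (hypothetical)

def is_blocked(domain: str, block_unknown: bool) -> bool:
    """Decide whether to block a domain using only the two lists."""
    if domain in ALLOWLIST:
        return False
    if domain in BLOCKLIST:
        return True
    # The domain is on neither list, so the filter has to guess.
    return block_unknown

# Permissive default: a brand-new adult site slips through (underblocking).
print(is_blocked("new-adult-site.example", block_unknown=False))        # False

# Cautious default: an unlisted sex-education charity is blocked (overblocking).
print(is_blocked("sex-education-charity.example", block_unknown=True))  # True
```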

Dynamic Content:

Much of the content online is user-generated, and social media platforms contribute significantly to this dynamic environment with new content constantly being created and shared. Filtering software may struggle to keep pace with the sheer volume and diversity of content in real-time, especially as creators find new ways to present material that may not fit traditional categorizations.  Only the host is in a position to access all of its content and apply effective age-restrictions.

Filtering only works where it is applied:

Filters may be applied to a child’s own device, but what about a tablet that the whole family shares?  Filtering may also be imposed at the ISP level, but what happens when the child uses the Wi-Fi at the local shopping mall?

Reliant on parental discretion:

Parents may also be persuaded by their children to remove the filter, perhaps ostensibly to play a game that is rated 18+.  Parents may not realise – or worry – that this gives their children access to pornography.  Filters are, in effect, a parental control operated at the sole discretion of parents, unless they are imposed by ISPs at the network level, which would impact adult users as well.  This is a question of political philosophy, but some legislators may not wish to give parents the discretion to expose children to online harms, whether accidentally or deliberately.

False Sense of Security:

Relying solely on filtering software may give parents and guardians a false sense of security. Children are often tech-savvy and may find ways to circumvent or disable filtering tools, rendering them less effective. They may simply discover the password, and it is not unusual for them to know more about technology than their parents.  For example…

Encrypted Connections and VPNs:

Many websites now use secure, encrypted connections (HTTPS), making it difficult for filtering software to inspect the content being transmitted to the user. Additionally, virtual private networks (VPNs) can be used to bypass ISP-level content filters.
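
As a sketch of the inspection problem (assuming a conventional TLS connection without encrypted Client Hello, and using an invented URL), the following shows roughly which parts of a request a network-level filter can still act on, and which are hidden from it:

```python
# A sketch of what a network-level filter can and cannot see for an
# HTTPS request, assuming a conventional TLS connection; the URL is invented.

from urllib.parse import urlparse

def visible_to_network_filter(url: str) -> dict:
    """Split a URL into the part exposed on the wire and the part HTTPS encrypts."""
    parts = urlparse(url)
    query = "?" + parts.query if parts.query else ""
    return {
        # The hostname typically still leaks via DNS lookups and the TLS SNI
        # field, so a filter can block whole domains.
        "visible_to_filter": parts.hostname,
        # The path, query string and page content travel inside the encrypted
        # session, so the filter cannot inspect them.
        "hidden_from_filter": parts.path + query,
    }

print(visible_to_network_filter("https://video-site.example/adult/clip?id=42"))
# {'visible_to_filter': 'video-site.example', 'hidden_from_filter': '/adult/clip?id=42'}
```

A VPN goes further still: the ISP then sees only an encrypted tunnel to the VPN provider, so even this domain-level visibility is lost.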

Just as reliant on websites’ compliance:

While some degree of automated filtering is possible, as described above, it is not perfect.  So, just as age verification has to be applied by the adult site, sites still need to voluntarily label themselves as Restricted to Adults to allow filters to block them reliably.  There is no shift away from the responsibility of adult sites to cooperate, and some sites could still refuse to do so.
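
To illustrate how little a filter can do without that cooperation, here is a minimal sketch (in Python) of honouring a voluntary “Restricted to Adults” self-label; the RTA label string shown is the commonly published one, included as an illustrative assumption rather than a specification:

```python
# A minimal sketch of honouring a voluntary "Restricted to Adults" self-label.
# The label string is the commonly published RTA value, shown here as an
# illustrative assumption; a non-compliant site simply omits it.

RTA_LABEL = "RTA-5042-1996-1400-1577-RTA"

def declares_itself_adult(html: str, headers: dict) -> bool:
    """Return True only if the page voluntarily carries an RTA-style label."""
    # The label may be sent as an HTTP response header...
    if RTA_LABEL in headers.get("Rating", ""):
        return True
    # ...or embedded as a <meta name="rating"> tag in the page itself.
    return RTA_LABEL in html

labelled = '<meta name="rating" content="RTA-5042-1996-1400-1577-RTA">'
print(declares_itself_adult(labelled, {}))                  # True
print(declares_itself_adult("<html>no label</html>", {}))   # False - nothing to block on
```

A site that simply omits the label passes this check, which is why self-labelling cannot replace the site’s own responsibility to cooperate.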

Customization and Personalization:

Filtering software often comes with default settings that may not align with a family’s specific values or requirements. Parents may need to customize these settings, but doing so requires a good understanding of the software, and some may find it challenging to set up the filters appropriately.

In summary, while filtering software can play a role in protecting children from adult content online, it is not a foolproof solution.  So, if and when the Supreme Court revisits the question of age verification, 20 years of new evidence from the real world will need to be considered before reaffirming any conclusion that filtering is effective.  Filtering will again be balanced against the burden of age verification imposed by the publisher, which has also evolved greatly in that time period to become far more straightforward, more reliable, cheaper and more privacy-preserving.  And when the facts change, the Supreme Court’s judgement may well change too.
