
USA – why Bills targeting obscene content will survive constitutional challenges

April 15, 2024

While one federal judge decided that the age assurance requirements of the California Age-Appropriate Design Code Act (CA AADC) may be unconstitutional, the majority of more senior judges, sitting at the US Fifth Circuit Court of Appeals, reached the opposite conclusion on a Texan Act (HB 1181), also requiring age verification but only for pornography.

Far from being a confusing contradiction, the rulings in these two cases provide great insight into why it is relatively easy to draft a law to prevent children seeing pornography online, but requires more care to overcome challenges presented by the US Constitution when tackling the wider range of potential harms from social media.  While we believe careful drafting can create constitutionally valid laws which provide proportionate protections for children in both these use cases, it is certainly an easier task to craft valid laws that focus exclusively on pornography.

A federal court in the Northern District of California had applied what is known as “strict scrutiny” to the CA AADC.  This standard often arises where freedom of speech under the First Amendment is at stake, and dictates that a legislature must have passed a law to further a “compelling governmental interest,” and must have narrowly tailored the law to achieve that interest. By contrast, the Fifth Circuit stated that the less demanding “rational basis” standard could be used when regulating obscene content, drawing on precedents from previous Supreme Court cases.  That said, this appeals court went further and concluded that, even if the strict standard were applied to pornographic websites, its requirements would still be met by the narrow tailoring of the Bill.

So, what explains these legal differences?

Primarily, it is because the CA AADC was much broader in scope, extending well beyond the obscene content to which the Texan Bill (and many others like it) limits itself.  There is already a widely-recognised definition of obscene content which, thanks to a previous Supreme Court decision, governments are entitled to restrict in order to protect children.

The tight definition of what type of material is in scope was important in differentiating the Bill from the Communications Decency Act (CDA), which was struck down by the Supreme Court:

“(3) The CDA did not specifically define the proscribed material; H.B. 1181 does”

“the [Supreme] Court relied principally on the CDA’s overbreadth and lack of adherence to the Miller standard.”

The ability for parents to enable their children to circumvent the law is helpful:

“Parental participation or consent could not circumvent the CDA; it can circumvent H.B. 1181”

An explicit reference to a viable age verification process is important:

“the [Supreme] Court relied at least in part on “the absence of a viable age verification process,” but that process is the central requirement of H.B. 1181”

The Texas ruling clearly differentiates between adults and children and allows for action to protect minors (even if that action creates a burden on adults).  The Appeals Court also dismissed the argument that privacy concerns outweighed child protection, and noted that improvements in age verification technology matter too:

“the [Supreme] Court’s decision was fundamentally bound up in the rudimentary “existing” technology of twenty-seven years ago, but technology has dramatically developed.”

The fact that the Bill included three broad categories of age assurance methods was also cited as helpful.  In particular, the judges noted that at least one method of age verification was no more privacy-invading than someone visually checking the age of customers entering a bar:

“At least one of those options will have no more impact on privacy than will in-person age verification à la Ginsberg”.

So are bills which seek to protect children from the harms of social media fatally wounded by the application of these arguments?  We do not believe so, if lessons are learnt from these two cases.

First of all, protections need to be narrowly targeted: not all aspects of social media are harmful to children.  Indeed, given most youngsters access the Internet almost entirely in this way, all the benefits of knowledge, community, and expression generated by the worldwide web are channelled through social media.  Banning social media for children outright would be a clear breach of the First Amendment.

Early attempts to curb access by minors have tended to focus on the use of children’s data without parental consent.  That has been problematic, firstly because this approach can prevent access altogether, making a First Amendment right contingent on a parent’s approval.  Secondly, there is the argument that federal law has already settled at 13 the age below which a child must secure verifiable parental consent, so the Children’s Online Privacy Protection Act (COPPA) overrides, or in legal language pre-empts, any state laws that try to raise that age to 16 or even 18.

But there is indisputably harmful content to be found on many social media platforms.  And that content may be served to children in greater quantities thanks to the functionality of the service, which will often seek to promote content it believes will retain users longer, without always caring whether that content is safe for minors.  It is hard to believe the Founding Fathers meant to prevent their descendants from protecting children, so how might they allow that protection to be achieved?

The easy place to start in any Bill addressing social media is to require age assurance before a minor can access adult content within the platform that meets the Miller Test.  The Texas judgement should apply comprehensively, particularly given its statement that, even if strict scrutiny is applied, the remedy of age assurance is narrowly enough tailored to be upheld.

There is then a need for a further precedent to extend the logic of the Miller test to other content that courts can be persuaded is at least as harmful to children as pornography, particularly when amplified by the platform’s functionality.  Is detailed guidance on how to take your own life more harmful than porn?  Or how to cut yourself repeatedly as you self-harm?  What about content designed to encourage viewers to develop an eating disorder, or to take diet pills and other medication untested or uncertified for children?  Will courts conclude that these are less harmful to children than adult content?  It seems unlikely.

After applying the other lessons from Texas, such as specifying a range of options for age assurance, including privacy-preserving estimation, and even allowing parents to override these precautions if they wish (although one would hope Child Services might take an interest in many such cases), such a law must surely have a strong chance of withstanding challenge.

The Fifth Circuit opinion may yet be contradicted by another regional appeals court, which would then prompt the Supreme Court to adjudicate.  Both law and politics give laws narrowly focused on pornography strong odds of success.  So far, laws aimed at social media have not even made it as far as the appeals stage, but if they adopt the lessons from Texas, and target only the harm within the platforms, not the platforms themselves, then they may also survive their day in court.

Privacy; a foundational concept for age verification

Perhaps the most frequent concern raised about age verification is a risk to privacy.  But the essence of age assurance is the ability to prove your age online WITHOUT disclosing your identity.  Our industry would not exist were there not the absolute need to preserve...