Federal Trade Commission Workshop summaries

February 3, 2026

Below are summaries of the presentations and panels from last week’s FTC workshop. Please note that these summaries are partly machine-generated, so you are advised to confirm exactly what was said by any speaker against the video of the full proceedings, which is available from the FTC website event page. (Speakers are also welcome to contact us to amend the text below if it misrepresents them.)


Andrew Ferguson, Chairman of the FTC

“By adopting robust age verification technologies, internet companies can demonstrate by deeds, not words, their commitment to our nation’s laws, and more importantly to our nation’s parents and the protection of their children.”

Chairman Ferguson opened the event by framing it within the FTC’s role in enforcing the Children’s Online Privacy Protection Act (COPPA), specifically around the requirements for notice and verifiable parental consent, and the development of age verification technology in relation to COPPA.
He referenced recent COPPA enforcement actions, including cases involving anonymous messaging apps and Disney. The Disney case was particularly notable: the FTC criticised the company for failing to label videos as “made for kids” on YouTube, a designation that triggers data protection measures under COPPA. Ferguson noted that Disney was offered an opportunity to reduce its compliance burden by implementing age verification technology on YouTube.
Ferguson stressed that there should be no inherent tension between technological innovation and protecting children online. He broadened the discussion to include issues like gambling and pornography, suggesting that some services feared age verification could reduce their access to adult user data. He referred to Free Speech Coalition v. Paxton, where concerns were raised about the potential negative impact of age verification on businesses’ customer and profit bases. Ferguson emphasised that public opinion and voters overwhelmingly favour protecting children, a stance supported by both the FTC and the Trump Administration.
Finally, Ferguson concluded by noting that COPPA should not be a barrier to the adoption of age verification technology. The goal of the workshop was to better understand how age verification and COPPA interact, with the intention of developing a policy statement and possibly amending COPPA rules to facilitate the use of age verification technology.


Panel 1 – Understanding the Landscape – Why Age Verification Matters

Mark Smith, Senior Privacy and Data Policy Manager, Centre for Information Policy Leadership (CIPL)

Mark opened by referring to CIPL’s 2024 paper on age verification and the organisation’s work to facilitate discussions between industry and regulators. Mark framed age verification as part of “age assurance,” which includes self-declaration, age estimation, and age verification. He explained that most US laws only require confirming whether someone is above or below a certain age threshold (typically 18), not verifying a precise age. He highlighted the rapid growth and fragmentation of state laws addressing harmful content, social media access, and child data governance, stressing common expectations like data minimisation and retention limits. Mark emphasised that age assurance should be a continuous, risk-based process that is privacy-preserving and supported by interoperable credentials and clear role allocation across the ecosystem.

Amelia Vance, Founder and President, Public Interest Privacy Center
Amelia argued that many child protection laws implicitly require age assurance, even when not explicitly mandated. She noted that digital services are unavoidable for children, making it crucial for them to be developmentally appropriate. She pointed to the Paxton decision, recognising that age verification technology has matured and may be constitutionally permissible. Amelia warned that identifying a child is just the first step; meaningful protections are needed, and she raised concerns about over-retention of data due to audit and enforcement fears.

Michael Murray, Head of Regulatory Policy, UK Information Commissioner’s Office (ICO)
Michael began by pointing out that children benefit from being online but often do not have an age-appropriate experience: 53% of 8-12 year olds have social media accounts, a fifth of them presenting as adults, and 62% of 13-17 year olds encounter harm online. He reaffirmed that self-declared age is ineffective, as it is widely manipulated. He explained the UK’s dual regulatory approach, with Ofcom overseeing online safety and the ICO regulating data protection. He emphasised that age assurance is central to both the Children’s Code and UK online safety, promoting layered methods of age verification. Michael highlighted the global trend toward ensuring services know when children are online and apply the necessary protections, with principles like data minimisation, accountability, and human oversight gaining international traction.

Bethany Soye, South Dakota State Representative
Bethany framed age assurance as a provider responsibility, similar to offline age-restricted products. She criticised the reliance on parental policing alone, positioning pornography as a public health issue. South Dakota has moved away from rigid content thresholds to a “regular course of business” test for age verification, especially for adult content. She supported technology-neutral definitions of reasonable age verification and emphasised that laws should minimise accidental exposure for younger children. She highlighted the App Store Accountability Act and stressed that state legislation is needed to clarify legal standards and push forward public policy momentum.

In the Q&A there was a more general discussion. Michael highlighted the risk of relying on profiling (inference), as this could involve unlawful processing of children’s data until the age is determined. He also warned against letting part of the process be handled on non-specialist systems (presumably a reference to Discord).


Mark Meador, FTC Commissioner

“Age verification offers a better way. It offers a way to unleash American innovation without compromising the health and well-being of America’s most important resource: its children. It is grounded in practices of responsibility and stewardship that extend across our entire history. It is a tool that empowers, rather than replaces, America’s parents. Really: I don’t know that we can afford to forgo it.”

Commissioner Meador argued that the concept of children as “digital natives” obscures the reality that today’s online environment has been deliberately shaped by commercial decisions that expose children to harm. He suggested a more accurate framing is that young users have become “digital subjects” of large-scale behavioural and economic experimentation.

Meador linked the rise of smartphone-centred platforms with sharp increases in youth mental-health harms, citing significant post-2010 growth in adolescent suicide, self-harm, depression and anxiety. While acknowledging that correlation is not causation, he described the pattern as deeply troubling and indicative of systemic failure rather than moral panic.

He rejected claims that age-based protections amount to censorship or authoritarianism, noting that society has long accepted age checks for alcohol, tobacco, films and video games. In his view, online services should not be treated differently simply because harm is digital rather than physical.

Addressing common criticisms, Meador dismissed the argument that age verification undermines parental responsibility, likening it to objections that ID checks at shops should be replaced by “better parenting”. He also challenged claims that age checks must be intrusive or burdensome, highlighting the emergence of privacy-preserving, third-party age verification systems that confirm eligibility without disclosing personal data to platforms.

Finally, he pointed to behavioural and AI-assisted age estimation as a promising area for innovation, arguing that the US should lead in developing responsible technologies that protect children while supporting market innovation. He framed age verification as a tool that complements parents, limits data exploitation and reflects long-standing societal norms around protecting minors online.

Full text here


Panel 2 – Age Assurance Technologies – Methods, Risks and Design Choices

Jim Siegl, Senior Fellow, Future of Privacy Forum (FPF)
Jim opened by explaining an infographic that the FPF has created and recently updated. He introduced age assurance as a spectrum of methods rather than a single technology, ranging from self-declaration to high-assurance verification. He emphasised that the goal is age placement, not identity disclosure, and that methods should use thresholds or age bands. Jim argued for proportionality, balancing the level of assurance against privacy risks and user friction. He outlined four main categories: declaration, inference, estimation, and verification, each suitable for different levels of risk. He highlighted the importance of layered or waterfall approaches, escalating only when necessary, and emerging concepts such as age signals, reusable credentials, and double-blind architectures. Jim acknowledged that no method is perfect, but many are significant improvements over self-declaration, which is easily bypassed.
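
To make the layered “waterfall” idea concrete, here is a minimal sketch in Python. It is illustrative only and not drawn from any FPF specification: the checker function, the user fields, and the escalation order are assumptions. Each stage returns an over/under placement with an assurance level, and the flow escalates only while the assurance achieved is below what the service’s risk tier requires.

    from dataclasses import dataclass
    from enum import IntEnum
    from typing import Optional

    class Assurance(IntEnum):
        # The four categories Jim outlined, ordered from weakest to strongest.
        DECLARED = 1   # self-declaration
        INFERRED = 2   # inference from existing account or usage signals
        ESTIMATED = 3  # e.g. facial age estimation
        VERIFIED = 4   # e.g. document-based verification

    @dataclass
    class AgeResult:
        over_threshold: bool   # age placement only: above or below the band
        assurance: Assurance

    def _check(user: dict, field: str, threshold: int, level: Assurance) -> Optional[AgeResult]:
        # Hypothetical per-method checker: returns a placement if this method
        # can produce one for the user, otherwise None.
        age = user.get(field)
        return AgeResult(age >= threshold, level) if age is not None else None

    METHODS = [  # least to most intrusive
        ("declared_age", Assurance.DECLARED),
        ("inferred_age", Assurance.INFERRED),
        ("estimated_age", Assurance.ESTIMATED),
        ("verified_age", Assurance.VERIFIED),
    ]

    def waterfall_age_check(user: dict, threshold: int, required: Assurance) -> AgeResult:
        # Escalate only when necessary, stopping at the first method that
        # yields a placement at or above the required assurance level.
        for field, level in METHODS:
            result = _check(user, field, threshold, level)
            if result is not None and result.assurance >= required:
                return result
        raise PermissionError("no method reached the required assurance level")

    # A low-risk service accepts a declaration; a higher-risk one escalates.
    user = {"declared_age": 21, "estimated_age": 22.5}
    print(waterfall_age_check(user, 18, Assurance.DECLARED).assurance.name)   # DECLARED
    print(waterfall_age_check(user, 18, Assurance.ESTIMATED).assurance.name)  # ESTIMATED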

Iain Corby, Executive Director, Age Verification Providers Association (AVPA)
Iain reiterated that age verification is not identity verification and should not require users to disclose their identity. He discussed the origins of third-party age verification, which emerged to protect user privacy in sensitive sectors. Iain emphasised core principles like data minimisation and immediate deletion, which are now embedded in many laws. He highlighted the growing regulatory importance of double-blind age verification, where neither the website nor the provider can see the full picture. Iain demonstrated a variety of available methods, such as document verification, facial age estimation, metadata inference, authoritative data, and behavioural signals. He argued for the importance of choice for users and regulators, with a layered enforcement model that includes service-level age checks alongside app-store or OS-level measures. Iain also pointed out that interoperability and reusable age tokens could reduce friction while improving privacy.
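
To make “double-blind” concrete, here is a toy Python sketch, not any protocol specified by AVPA or its members. The provider attests only to an over/under result and never learns which service the token is shown to; the service checks the attestation and its freshness without learning who the user is. An HMAC with a pre-shared key stands in for what would in practice be a public-key signature, so that services would hold only a verification key.

    import hashlib
    import hmac
    import json
    import secrets
    import time

    PROVIDER_KEY = secrets.token_bytes(32)  # stands in for the provider's signing key

    def issue_age_token(over_18: bool) -> dict:
        # Provider side: after running whichever check the user chose, sign
        # only the over/under result. No identity, no destination site.
        token = {"over_18": over_18, "nonce": secrets.token_hex(8), "iat": int(time.time())}
        body = json.dumps(token, sort_keys=True).encode()
        token["sig"] = hmac.new(PROVIDER_KEY, body, hashlib.sha256).hexdigest()
        return token

    def accept_age_token(token: dict, max_age_s: int = 300) -> bool:
        # Service side: verify the attestation, learning nothing about the
        # user beyond a single over/under bit.
        token = dict(token)
        sig = token.pop("sig")
        body = json.dumps(token, sort_keys=True).encode()
        expected = hmac.new(PROVIDER_KEY, body, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(sig, expected):
            return False
        return token["over_18"] and (time.time() - token["iat"] <= max_age_s)

    print(accept_age_token(issue_age_token(over_18=True)))  # True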

Denise G. Tayloe, CEO and Co-Founder, PRIVO
Denise drew a clear distinction between age verification and verifiable parental consent, warning that they are often conflated. She emphasised the unresolved challenge of verifying the relationship between parents and children. Denise highlighted the risk of children misusing reusable credentials created at home without parental awareness, and suggested that methods such as work-based email verification could be safer outside the home context. She warned that privacy risks persist after the age gate, particularly if children gain persistent access. Denise advocated for leveraging existing offline trust relationships, like schools, to support parent verification. She framed trust, transparency, and informed choice as essential for the long-term adoption of age assurance systems. Denise also called for a law against parents lying about their children’s ages.

Rick Song, CEO and Co-Founder, Persona
Rick took a technology-agnostic stance, emphasising that the right method depends on the risk, audience, and use case. He identified three key performance metrics for age verification: coverage, assurance, and usability. Rick stressed that the aim is meaningful improvement over self-attestation, not perfection. He distinguished between age gating and providing age-appropriate experiences, each facing different threat models. Rick discussed common circumvention risks, such as borrowed IDs, parental impersonation, VPN use, deepfakes, and account selling, and explained how liveness checks, binding, risk signals, and continuous verification can mitigate abuse. He noted that protecting child-only spaces is more difficult than blocking adult content due to sophisticated attackers, and he supported strong data minimisation, particularly for biometric data, with rapid deletion.
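
As a sketch of the continuous verification Rick described, the fragment below (Python; the signal names, weights, and threshold are invented for illustration) scores a few of the circumvention signals he listed and triggers a re-check when a session stops looking like the person who originally passed the gate.

    # Hypothetical weights; a real system would tune these against observed abuse.
    RISK_WEIGHTS = {
        "new_device": 3,        # credential or account reused from different hardware
        "vpn_detected": 2,      # location signals suddenly masked
        "behaviour_drift": 2,   # usage no longer matches the verified session
        "account_for_sale": 4,  # indicators of account selling or sharing
    }
    REVERIFY_THRESHOLD = 4

    def should_reverify(signals: dict) -> bool:
        # signals maps a signal name to whether it fired for this session.
        score = sum(weight for name, weight in RISK_WEIGHTS.items() if signals.get(name))
        return score >= REVERIFY_THRESHOLD

    print(should_reverify({"vpn_detected": True}))                      # False: one weak signal
    print(should_reverify({"new_device": True, "vpn_detected": True}))  # True: re-run the age check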

Sarah Scheffler, Assistant Professor, Carnegie Mellon University, CyLab
Sarah shared a study on user preferences for methods of age assurance. Sarah emphasised that privacy is a core requirement in most US age verification laws, not optional. She presented research showing that users are uncomfortable with many age verification methods, particularly due to concerns over surveillance, tracking, identity theft, and secondary data use. Sarah highlighted risks posed by metadata and linkable identifiers, even when names or IDs are not stored. She discussed cryptographic approaches, such as selective disclosure and zero-knowledge proofs, as promising steps forward but noted they are not sufficient on their own. Sarah urged policymakers to consider the cumulative privacy impact of age verification infrastructures.
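
To illustrate what selective disclosure means in practice, here is a toy Python sketch. It is not a production credential format and falls well short of a full zero-knowledge proof: the issuer signs a commitment to all attributes, the holder reveals only the over-18 attribute plus a sibling hash, and the verifier confirms the signature without learning the hidden attribute. In a real deployment the issuer would use a public-key signature; the HMAC here is a stand-in.

    import hashlib
    import hmac
    import json
    import secrets

    ISSUER_KEY = secrets.token_bytes(32)  # stands in for the issuer's signing key

    def h(data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

    def leaf(name: str, value, salt: bytes) -> bytes:
        # Salting each attribute stops a verifier from brute-forcing hidden
        # values out of their hashes.
        return h(salt + json.dumps([name, value]).encode())

    # Issuer: hash each attribute separately, sign only the combined commitment.
    attrs = {"name": "Alex", "over_18": True}
    salts = {k: secrets.token_bytes(16) for k in attrs}
    leaves = {k: leaf(k, v, salts[k]) for k, v in attrs.items()}
    root = h(leaves["name"] + leaves["over_18"])
    signature = hmac.new(ISSUER_KEY, root, hashlib.sha256).digest()

    # Holder -> verifier: disclose only over_18, its salt, and the sibling hash.
    disclosure = {
        "value": True,
        "salt": salts["over_18"],
        "sibling": leaves["name"],  # a hash, revealing nothing about the name
        "signature": signature,
    }

    # Verifier: rebuild the commitment from the one disclosed attribute and
    # check the issuer's signature. It learns the over/under fact, nothing else.
    rebuilt = h(disclosure["sibling"] + leaf("over_18", disclosure["value"], disclosure["salt"]))
    expected = hmac.new(ISSUER_KEY, rebuilt, hashlib.sha256).digest()
    assert hmac.compare_digest(expected, disclosure["signature"])
    print("over_18 verified without revealing the other attributes")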


Panel 3 – Navigating the Regulatory Maze of Age Verification

Katherine Hass, Director, Consumer Protection Division, Utah Department of Commerce
Katherine supported age verification laws, acknowledging that while their long-term impacts are still emerging, such laws are necessary and effective. She praised Utah’s reliance on the IEEE standards, rather than the state prescribing a method: the goal was a defined level of accuracy, without stifling technology.
Katherine noted that most platforms are data-mining companies that already know their customers, and highlighted that AI makes it easy to ‘trust but verify’ users. She likened children’s online access to entering age-restricted physical spaces, asserting that platforms have a duty of care. She emphasised Utah’s focus on child flourishing, parental involvement, and protection from harms such as pornography, social media algorithms, AI chatbots, and in-app purchases. She advocated for full exclusion of certain categories, like pornography, and tailored experiences with parental consent for others. She argued that Utah’s laws are narrowly targeted at real harms, not blanket restrictions, and stressed the need for the FTC to help identify dependable technologies while allowing innovation to progress.

Jennifer Huddleston, Senior Fellow, Technology Policy, CATO Institute
Jennifer raised concerns about the privacy and security risks of age verification (AV), particularly the creation of large datasets containing sensitive or biometric information. She warned that these systems could lead to child identity theft in the long term. Jennifer argued that parents, not lawmakers, are best suited to decide what is appropriate for their children, and she advocated for empowering parents through digital literacy, education, and parental controls, rather than relying on mandatory age verification laws. She also pointed out the potential for these laws to block access to beneficial online resources for young people and emphasised the need for a nuanced, case-by-case approach.

Clare Morell, Fellow, Bioethics, Technology and Human Flourishing Program, Ethics & Public Policy Center
Clare framed AV laws as empowering parents by restoring their oversight, contrasting them with ineffective self-declared age methods that are easily bypassed. She emphasised that individual parents cannot address the collective harms posed by social media and pornography exposure, and argued that AV is the mechanism that makes age limits meaningful online. Clare highlighted the importance of defining platforms more broadly than current US laws, which often have narrow size criteria. She proposed that AV laws should balance privacy protections, effectiveness, and flexibility, with age verification for certain services (18+ for porn, 16/18 for social media) and multiple methods for verification beyond government IDs. Clare highlighted that parental controls don’t work on the school bus when other kids share content from their unrestricted phones.

Sara Kloek, VP of Education and Youth Policy, Software & Information Industry Association (SIIA)
Sara shifted the discussion to what happens after age verification is completed, questioning what content is being restricted and how decisions are made. She acknowledged that AV is necessary for adult sites but argued that for other sites, it depends on the context. Sara emphasised the importance of a risk-based approach, distinguishing between high-risk services (like adult content) and lower-risk services. She noted that while AV could be a useful tool for child protection, it must be balanced with data minimisation, accessibility to information, and clarity for industry implementation. Sara also suggested that Congress needs to pass a federal privacy law to provide a clearer framework for AV.


Panel 4 – Deploying Responsible Age Verification at Scale

Emily Cashman Kirstein, Child Safety Policy Manager, Google
Emily explained how Google uses user activity to infer age and determine whether someone is over or under 18. She emphasised that age assurance should not be a one-size-fits-all solution, as the approach must be tailored based on risk. Google provides age signals to app developers, but she argued that app developers should remain responsible for identifying risks in their services. Emily advocated for app stores to only require age signals from apps that need them, and for users accessing apps via websites to be considered as well. She stressed that adopting safety features should be the path to reducing liability while ensuring that age information is shared only with apps that require it.

Nick Rossi, Director, Federal Government Affairs, Apple
Nick Rossi highlighted Apple’s “Ask to Buy” system, which allows parents to approve or disapprove app purchases, and discussed their privacy-protecting age verification system that requires parental approval. He emphasised Apple’s application of data minimisation principles and their reliance on parental attestation for age verification. Nick responded to claims from Meta regarding age checks by default, stating that Apple’s age verification system is designed for privacy and is not limited to in-app purchases but also extends to app downloads. He discussed how the level of age verification required should vary depending on the service, drawing an analogy to different levels of checks for entering a mall versus a liquor store.

Antigone Davis, VP and Global Head of Safety, Meta
Antigone explained that teens who change their age on Meta’s platforms must now prove their age. She described Meta’s use of AI for age verification and its partnership with the OpenAge Initiative. Antigone called for legislation requiring app stores to implement age checks at the store level, especially as teens use multiple apps (approximately 40), making it difficult for parents to manage. She also advocated for a mechanism allowing parents to block access to certain apps, and for general-audience apps to prevent under-13s from accessing them. Antigone emphasised the role of in-app purchases, which are already parent-approved, in facilitating a safer digital environment. She pointed out that if only age-differentiating apps are required to do age verification, it would discourage services from adopting age differentiation.

Graham Dufault, General Counsel, The App Association
Graham noted that some members of The App Association do not use AV as their apps do not have age-related risks, such as apps for managing pigs. He expressed concerns about app stores providing age information to developers, fearing it would require compliance with COPPA. Graham acknowledged that age assurance is necessary for certain apps, like those for gambling, porn, and e-commerce for age-restricted goods, but not for all apps. He discussed how ensuring the accuracy of age information is challenging, referencing ISO 27566 for classification accuracy, and pointed out that 51% of kids use parental controls, which can be an effective tool.

Amy Lawrence, Chief Privacy Officer and Head of Legal, SuperAwesome
Amy emphasised the importance of age verification for ad networks targeting under-18s. She explained that companies need to know the age of their users to ensure ads are appropriate and comply with legal standards. She acknowledged that some companies might not realise they are attracting kids, which can lead to unintended data collection risks. Amy argued that AV can help reduce legal risks and prevent companies from unknowingly violating privacy laws. She pointed out the need for a clear decision tree to handle conflicting age signals, and for more guidance on which signals should override others.
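
To show the kind of decision tree Amy had in mind, here is a small Python sketch. The priority ordering below is invented for illustration; which signal should actually override which is precisely the guidance she said is still missing.

    # Hypothetical ordering, most authoritative first.
    SIGNAL_PRIORITY = [
        "verified_document",      # e.g. a document-based check
        "parental_attestation",   # e.g. an OS-level family account
        "os_age_signal",          # a platform-provided age band
        "behavioural_inference",  # a model-based estimate from activity
        "self_declared",          # the date of birth typed at sign-up
    ]

    def resolve_age_signals(signals: dict):
        # signals maps a source to its over/under result for the relevant
        # threshold; returns the winning source and result, or None.
        for source in SIGNAL_PRIORITY:
            if source in signals:
                return source, signals[source]
        return None

    # A self-declared "over 13" conflicting with a behavioural "under 13":
    print(resolve_age_signals({"self_declared": True, "behavioural_inference": False}))
    # -> ('behavioural_inference', False): the higher-priority signal wins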

Robin Tombs, CEO and Co-Founder, Yoti
Robin Tombs discussed Yoti’s services, focusing on age verification and cross-jurisdictional challenges. He pointed out that adult services are usually accessed through websites, and that parents often input incorrect age signals. Robin shared that Yubo has completed 100 million checks, leading to fewer bots and greater user trust. He noted the different requirements in various jurisdictions, with countries like the UK requiring a single check while others, like Italy and France, mandate daily checks. Robin emphasised the growing importance of privacy-preserving techniques and on-device checks as the technology advances to offer more secure solutions. He cited Germany’s reduction of the buffer age used with facial age estimation (FAE) from 5 years to 3. Robin also highlighted NIST testing of FAE to help businesses without in-house expertise select good vendors.
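
The buffer-age mechanism Robin mentioned is simple to state in code. In the sketch below (Python; the function and its defaults are illustrative), facial age estimation passes only users whose estimated age clears the legal threshold plus a buffer that absorbs the estimator’s error, while everyone inside the buffer zone falls back to a higher-assurance method. On Robin’s account of the German change, buffer_years drops from 5 to 3, letting more young adults pass on estimation alone.

    def fae_gate(estimated_age: float, threshold: int = 18, buffer_years: int = 3) -> str:
        # Pass only when the estimate clears threshold + buffer, so the
        # estimator's typical error cannot wave through an under-age user.
        if estimated_age >= threshold + buffer_years:
            return "pass"
        return "fallback"  # e.g. escalate to a document-based check

    print(fae_gate(24.0))                  # 'pass'     (24 >= 18 + 3)
    print(fae_gate(22.0))                  # 'pass'     with the 3-year buffer (22 >= 21)
    print(fae_gate(22.0, buffer_years=5))  # 'fallback' under the old 5-year buffer (22 < 23)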


Christopher Mufarrige, Director, Bureau of Consumer Protection

“I encourage those of you listening today, and the policy, business, and research communities as a whole, to advance empirical work to support the adoption of age verification technologies for the protection of children online.”

Mufarrige reiterated that online child protection is the FTC’s most critical consumer protection work. The goal is to enforce COPPA vigorously, ensuring that parents, not tech companies, remain the primary decision-makers in their children’s digital lives. He highlighted several recent actions against companies allegedly failing to protect minors:

  • Send It: For unlawful data collection from children via anonymous messaging.
  • Disney & Aperion: Settlements regarding COPPA violations.
  • Pornhub: Using Section 5 (unfair or deceptive acts) to address the mishandling of non-consensual content and failures to protect children.

The most insightful part of the speech addressed a major hurdle: age verification technologies often require collecting personal data to work, which technically violates COPPA if done before parental consent is obtained.

The FTC’s stance: COPPA should be a bridge, not a barrier, to protective technology. The Commission is currently seeking ways to resolve this legal inconsistency.

The FTC is also calling on the research and business communities to provide more empirical evidence on AV methods. It needs to know which technologies are the most accurate, hardest to circumvent, and least invasive before wider adoption is mandated.

Full text here