New Zealand

New Zealand does not yet have a single, comprehensive “online safety” statute that drives universal age assurance across platforms. Child protection online is addressed to a limited extent through a patchwork of laws covering harmful communications, content classification and privacy. At the same time, there is active political and policy debate about stronger regulation, including proposals for a minimum social media age of 16 and for a new regulatory framework for online services.

National legal framework and regulators

Harmful Digital Communications Act 2015 (HDCA)
The HDCA is New Zealand’s main statutory mechanism for addressing online harm such as cyberbullying, harassment and intimate image abuse. It focuses on harm to individuals rather than age gating content. Netsafe is the “Approved Agency” that receives and assesses complaints and works with platforms and service providers to resolve them, with escalation routes that can include the District Court and Police.

Content classification and pornography regulation
New Zealand regulates objectionable and restricted content through the Films, Videos and Publications Classification Act 1993 and the work of the Classification Office. In practice, most pornography accessed in New Zealand is hosted offshore, so access controls are often ineffective at a national level even where domestic law expects steps to prevent underage access. The Classification Office has also published substantial research and ministerial briefings on children’s exposure to pornography, which has been an important driver of policy discussion on age assurance.

Privacy law
New Zealand’s privacy framework is set by the Privacy Act 2020. It does not set a single digital age of consent in the GDPR sense, but it does create strong expectations around purpose limitation, proportionality, security safeguards and managing higher-risk processing. In practice, this shapes how age assurance is designed and governed, particularly where identity evidence, biometrics, or persistent identifiers are used.
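
To make the data-minimisation point concrete, the sketch below (Python, purely illustrative; the type and function names are our own and are not drawn from any New Zealand regulator, statute or verification provider) shows one defensible pattern: check identity evidence once, derive an over/under-16 flag, and retain only that flag with audit metadata, never the date of birth, document image or biometric template.

    from dataclasses import dataclass
    from datetime import date

    @dataclass(frozen=True)
    class EvidenceCheck:
        """Transient result of an identity-evidence check (hypothetical)."""
        date_of_birth: date

    @dataclass(frozen=True)
    class AgeAssuranceResult:
        """The only data retained: an age-band flag plus audit metadata."""
        is_over_16: bool
        method: str        # e.g. "document-check", "facial-estimation"
        checked_on: date

    def minimise(evidence: EvidenceCheck, method: str) -> AgeAssuranceResult:
        """Derive an over/under-16 flag, then discard the evidence.

        Purpose limitation and proportionality under the Privacy Act 2020
        point towards keeping the outcome, not the underlying identity data.
        """
        today = date.today()
        dob = evidence.date_of_birth
        age = today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))
        return AgeAssuranceResult(is_over_16=age >= 16, method=method, checked_on=today)

The design intent is that the service’s database never holds more age-related personal information than the band it actually needs, which also narrows the impact of any breach.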

A developing reform agenda

Between 2021 and May 2024, the Department of Internal Affairs (DIA) led a major review of content and platform regulation, the Safer Online Services and Media Platforms review, focused explicitly on reducing content harms, particularly for children and young people. The DIA described this as work to improve the regulation of online services and media platforms, and the review canvassed new regulatory models and stronger platform responsibilities.

Minimum age and age assurance debate for social media

New Zealand’s debate has accelerated around a statutory minimum age for social media and mandatory age verification. A Member’s Bill, the Social Media (Age-Restricted Users) Bill, proposes restricting access for under-16s and would require platforms to implement age verification to prevent underage account creation. Parliament is set to debate the Bill, and a broader parliamentary inquiry into the effects of social media on young people is underway, with findings expected in early 2026.
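
If the Bill passes in something like its current form, the practical compliance point would be a hard gate at account creation. The sketch below is a minimal, hypothetical illustration of that gate only; the Bill’s final verification requirements (accepted methods, record-keeping, appeal routes) are not yet settled, and the names here are ours.

    MINIMUM_AGE = 16  # threshold proposed in the Member's Bill; not yet law

    class UnderageSignupError(Exception):
        """Raised when the age assurance outcome fails the minimum-age check."""

    def create_account(username: str, age_verified: bool, is_over_16: bool) -> dict:
        """Gate signup on a completed age verification step (illustrative)."""
        if not age_verified:
            raise UnderageSignupError("Age verification has not been completed.")
        if not is_over_16:
            raise UnderageSignupError(
                f"Account creation blocked: verified age is under {MINIMUM_AGE}."
            )
        # Proceed with the normal signup flow, storing only the outcome flag,
        # not the evidence used to establish age.
        return {"username": username, "age_assured": True}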

What this means for age assurance in practice today

At present, New Zealand’s age assurance “requirements” are mainly situational:

  • If your service enables conduct that causes harm to individuals, you should expect HDCA complaints handling and escalation risk, with Netsafe as the first-line statutory channel
  • If you publish or facilitate access to pornography or other restricted or objectionable content from New Zealand, you should assume regulators and policymakers expect practical steps to limit underage access, even though enforcement leverage is weaker against offshore services
  • If you process children’s data or operate a service likely to be used by minors, privacy and safety expectations will increasingly push platforms towards defensible age assurance, especially where content recommendation, messaging, or adult contact risks exist

Looking ahead

New Zealand is best characterised as a jurisdiction with limited hard age-gating law today but a clear direction of travel towards stronger online child safety rules, potentially including statutory minimum-age enforcement for social media and a more formal online safety regulatory framework. For international services, the most important planning assumption is that age assurance is moving from “optional risk control” to “expected compliance capability”, and that legislative change is plausible within the next parliamentary cycle.