New Zealand
New Zealand does not yet have a single comprehensive online safety statute requiring universal age assurance across digital platforms. Online child protection is addressed through a combination of harmful communications law, content classification, and privacy regulation. At the same time, there is active political and policy debate on stronger regulation, including proposals for a statutory minimum social media age of 16 and mandatory age verification.
National Legal Framework and Regulators
New Zealand regulates online harms and child protection through:
- Harmful communications law
- Content classification and censorship law
- Privacy law
New Zealand operates a multi-agency regulatory structure: the Department of Internal Affairs provides policy leadership, alongside the Office of Film and Literature Classification (the Classification Office), the Privacy Commissioner, and Netsafe.
Harmful Digital Communications Act 2015 (HDCA)
The Harmful Digital Communications Act 2015 is New Zealand’s primary statute addressing online harms such as:
- Cyberbullying
- Harassment
- Intimate image abuse
The Act focuses on harm to individuals rather than age-gating content.
Netsafe is the approved agency under the HDCA and is responsible for:
- Receiving and assessing complaints
- Working with platforms and service providers to resolve harm
- Escalating cases where necessary to the District Court or to the police
Courts may issue takedown, correction, or cease-and-desist orders, including orders requiring platforms or intermediaries to remove harmful content.
Content Classification and Pornography
New Zealand regulates objectionable and restricted content under the Films, Videos, and Publications Classification Act 1993, administered by the Classification Office.
Key features include:
- Classification of content as objectionable or age-restricted
- Legal restrictions on supply to minors
- Requirements that distributors and service providers take reasonable and practicable steps to prevent minors from accessing restricted material
The Classification Office has produced research and ministerial briefings on children’s exposure to pornography, which have informed policy debate on age assurance.
Privacy and Children’s Data
The Privacy Act 2020 governs personal data processing in New Zealand.
The Act does not set a single statutory digital age of consent. However, it establishes strong requirements relating to:
- Purpose limitation
- Proportionality
- Security safeguards
- Management of higher-risk processing
These principles shape expectations around age assurance, particularly where systems involve:
- Identity verification
- Biometrics
- Persistent identifiers
Services likely to be used by minors are expected to adopt safeguards proportionate to risk.
The Privacy Commissioner emphasises heightened safeguards and risk assessments where children’s data is involved.
Social Media Minimum Age Proposals
Policy debate in New Zealand has accelerated around a statutory minimum age for social media and mandatory age verification.
A Member’s Bill, the Social Media (Age-Restricted Users) Bill, proposes:
- Restricting social media access for users under 16
- Requiring platforms to implement age verification to prevent underage accounts (see the sketch after this list)
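The Bill sets an outcome (no under-16 accounts) rather than a technical method. As a purely illustrative sketch, the following TypeScript shows one way a platform could gate sign-up behind a third-party age-assurance check; all names here (SignupRequest, AgeVerifier, and so on) are hypothetical and not drawn from the Bill.

```typescript
// Hypothetical sketch only: the Bill prescribes an outcome, not a method,
// and none of these names come from any statute or real provider API.

const MINIMUM_AGE = 16; // threshold proposed in the Social Media (Age-Restricted Users) Bill

interface SignupRequest {
  declaredDateOfBirth: Date;
  verificationToken?: string; // e.g. issued by a third-party age-assurance provider
}

interface AgeVerifier {
  // Resolves to the minimum age the provider can attest to, or null on failure.
  verify(token: string): Promise<{ ageAtLeast: number } | null>;
}

// Compute age in whole years from a date of birth.
function ageFromDateOfBirth(dob: Date, now: Date = new Date()): number {
  const age = now.getFullYear() - dob.getFullYear();
  const hadBirthdayThisYear =
    now.getMonth() > dob.getMonth() ||
    (now.getMonth() === dob.getMonth() && now.getDate() >= dob.getDate());
  return hadBirthdayThisYear ? age : age - 1;
}

// Gate account creation on both a declared age and a verifiable check.
async function canCreateAccount(
  req: SignupRequest,
  verifier: AgeVerifier,
): Promise<boolean> {
  // Self-declaration alone would be unlikely to satisfy a verification duty;
  // it serves here only as a cheap first-pass filter.
  if (ageFromDateOfBirth(req.declaredDateOfBirth) < MINIMUM_AGE) return false;
  if (!req.verificationToken) return false;

  const result = await verifier.verify(req.verificationToken);
  return result !== null && result.ageAtLeast >= MINIMUM_AGE;
}
```

In a design like this, the declared date of birth is only a first filter; the signal that would actually satisfy a verification duty comes from the external check.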
Consideration of the Bill is underway, alongside a broader parliamentary inquiry into the effects of social media on young people; the inquiry’s findings are expected in early 2026.
Regulatory Reform Agenda
Between 2021 and May 2024, the Department of Internal Affairs (DIA) conducted the Safer Online Services and Media Platforms review.
This review examined:
- Regulation of online services and media platforms
- Measures to reduce harmful content exposure
- Protection of children and young people online
The review included discussion of:
- New regulatory models for online services
- Stronger platform safety responsibilities
- Potential age-assurance requirements
The review concluded in 2024, and government policy development toward a modernised regulatory framework is ongoing.
Current Age Assurance Expectations
At present, age assurance in New Zealand arises indirectly through sector-specific obligations rather than a universal statutory requirement.
Major platforms operating in New Zealand have adopted the voluntary Aotearoa New Zealand Code of Practice for Online Safety and Harms, committing to measures to reduce harmful content and improve user safety.
In practice:
- Services enabling harmful conduct may face HDCA complaints via Netsafe
- Providers facilitating access to pornography or restricted content are expected to take practical steps to limit underage access
- Services processing children’s data or targeting minors are expected to adopt defensible age-assurance measures under privacy and safety expectations (see the sketch below)
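To make the idea of safeguards "proportionate to risk" concrete, here is a minimal sketch of how a service might map an assessed risk tier to an age-assurance method. The three tiers and the method names are assumptions for illustration; no New Zealand instrument currently prescribes such a mapping.

```typescript
// Illustrative only: the tiers and method names below are assumptions,
// not categories defined in any New Zealand statute or code of practice.

type RiskTier = "low" | "medium" | "high";

type AssuranceMethod =
  | "self-declaration"   // e.g. a general-audience service with minimal data collection
  | "age-estimation"     // e.g. estimation techniques with a fallback check
  | "verified-identity"; // e.g. document or credential verification

// Higher-risk processing (restricted content, biometrics, persistent
// identifiers) attracts a stronger assurance method under this assumed model.
function selectAssuranceMethod(tier: RiskTier): AssuranceMethod {
  switch (tier) {
    case "low":
      return "self-declaration";
    case "medium":
      return "age-estimation";
    case "high":
      return "verified-identity";
  }
}

// Example: a service hosting age-restricted material under the Classification
// Act would plausibly sit in the "high" tier under this assumed model.
console.log(selectAssuranceMethod("high")); // "verified-identity"
```

Tying the method to an explicit, documented risk tier makes the proportionality judgement auditable, which sits comfortably with the Privacy Commissioner’s emphasis on risk assessments where children’s data is involved.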
Looking ahead
New Zealand is best characterised as a jurisdiction with limited age-verification laws today but a clear direction of travel towards stronger online child safety rules, potentially including statutory minimum-age enforcement for social media and a more formal online safety regulatory framework. Legislative change is plausible within the next parliamentary cycle.