
Australia is a leading jurisdiction in online child safety regulation, with age assurance becoming central to how digital services manage access to content and platforms. New legal obligations and industry codes are progressively coming into force that require online services to take meaningful steps to distinguish between children and adults online where those services facilitate social interaction or expose users to age-restricted material.
Policymakers argue that these changes are not about banning lawful content for adults. In their view, the changes reflect a shift toward risk-based safeguards designed to reduce children’s exposure to online harms while balancing privacy, proportionality and technical feasibility. Under Australia’s evolving framework, responsibility for age assurance sits squarely with platforms and service providers, not with users or parents, and the laws avoid mandating specific technical solutions, leaving room for innovation and development in this sector.
Social Media Minimum Age (SMMA)
Since 10 December 2025, specified social media platforms (initially just ten) have been legally required to take “reasonable steps” to prevent people under the age of 16 from creating or continuing to hold accounts. Platforms that fail to comply may face civil penalties of up to AUD 49.5 million.
To meet this requirement, in-scope platforms are expected to implement age assurance measures, although the legislation does not mandate specific technical solutions. Age assurance may include age verification, age estimation or age inference methods, provided the method is effective and proportionate; self-declaration is explicitly not sufficient on its own. The regulator, the eSafety Commissioner, has stated that her intent is to ensure platforms can reliably distinguish users under and over 16 while balancing privacy and data minimisation.
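As a concrete, deliberately simplified illustration, the Python sketch below shows one way a platform might combine these signals so that self-declaration alone never suffices. Every name and threshold here, including the two-year estimation buffer, is an assumption made for illustration rather than anything prescribed by the legislation or by eSafety.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AgeSignals:
    declared_age: Optional[int]      # self-declared age (never sufficient alone)
    estimated_age: Optional[float]   # e.g. output of a facial age-estimation model
    inferred_adult: Optional[bool]   # e.g. inferred from longstanding account signals

# Hypothetical safety margin applied to estimation near the 16-year threshold.
ESTIMATION_BUFFER_YEARS = 2.0

def may_hold_account(signals: AgeSignals, minimum_age: int = 16) -> bool:
    """Illustrative SMMA-style check: a declared age under 16 is decisive,
    but a declared age of 16+ must be corroborated by estimation or inference."""
    if signals.declared_age is not None and signals.declared_age < minimum_age:
        return False
    if signals.estimated_age is not None:
        # Require the estimate to clear the threshold with a margin.
        return signals.estimated_age >= minimum_age + ESTIMATION_BUFFER_YEARS
    if signals.inferred_adult is not None:
        return signals.inferred_adult
    # Self-declaration with no corroborating signal: not sufficient on its own.
    return False
```

The buffer reflects a common design choice of tolerating some false rejections near the threshold rather than false acceptances; a real deployment would tune it against the measured accuracy of whatever estimation method it uses.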
The SMMA applies only to services whose primary purpose is enabling users to interact, share content and communicate socially online. It does not apply to standalone messaging services, online games, education platforms or health services. Platforms considered in scope at the outset included Facebook, Snapchat, X, Twitch, Instagram, YouTube, Kick, Reddit, Threads and TikTok. This list is not fixed and may change over time, with a formal review scheduled for June 2026.
Regulatory and privacy guidance for implementing age assurance under the SMMA has been issued by the Office of the Australian Information Commissioner (OAIC), setting expectations around lawful collection, use and protection of personal information. This should be read in conjunction with the “Reasonable Steps” guidance issued by eSafety.
Industry Codes under the Online Safety Act 2021
Australia’s online safety framework began with a set of mandatory industry codes, drafted by industry but overseen by the eSafety Commissioner and developed in two phases under the Online Safety Act 2021. The SMMA was added after this process was well underway, bringing social media into the picture.
Phase 1 focuses on protecting users from the most harmful and illegal online content, including child sexual exploitation material and terrorist content. These codes, which are already in effect, require services to rapidly remove such material and maintain effective complaint-handling and reporting processes.
Phase 2 is specifically designed to reduce children’s exposure to lawful but age-inappropriate content such as online pornography, high-impact violence, self-harm and suicide content, and simulated gambling. Age assurance is introduced as a protective mechanism for content that would generally be classified as 18+. These codes come into effect in two stages:
27 December 2025
- Search engines – identify child accounts (under 18) and lock “Safe Search” to the highest level by default.
- Hosting services – place 18+ (Class 2) content behind a “reasonable” age gate (a minimal sketch of such a gate follows these lists).
- Internet carriage service providers – offer and actively promote filtering tools to parents at the point of sale and annually thereafter.
9 March 2026
- Social media – protect 16–17 year olds from adult content in feeds and DMs.
- Adult websites – mandatory “hard” age verification for all users.
- AI and chatbots – prevent AI from generating explicit material for children.
- Online gaming – filter 18+ content in game chats and lobbies.
- App stores – block downloads of R18+ and X18+ rated apps.
- Hardware and devices – mandatory parental control tools on phones and consoles.
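The hosting-services item in the first list calls for a “reasonable” age gate around Class 2 material; the minimal Python sketch below shows the shape of such a gate. The `Classification` enum and the `verified_18_plus` flag are assumptions invented for this example; the codes specify the outcome, not any particular implementation.

```python
from enum import Enum

class Classification(Enum):
    GENERAL = "general"
    CLASS_2 = "class_2"  # 18+ material under the Australian classification scheme

def can_serve(content_class: Classification, verified_18_plus: bool) -> bool:
    """Serve Class 2 (18+) content only behind an age-assurance check.

    verified_18_plus stands in for whatever verification, estimation or
    inference outcome the service relies on; the codes require a reasonable
    gate but do not mandate a particular method.
    """
    if content_class is Classification.CLASS_2:
        return verified_18_plus
    return True  # general content is not age-gated

# An unverified session is refused 18+ content but may view general content.
assert can_serve(Classification.CLASS_2, verified_18_plus=False) is False
assert can_serve(Classification.GENERAL, verified_18_plus=False) is True
```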
The codes are technology neutral and treat age assurance as a flexible, risk-based concept rather than mandating a single technology. Services are expected to apply age verification, age estimation or age inference methods proportionate to the level of risk their service poses. Higher-risk services, particularly those that provide pornography or other adult-only content, are expected to apply methods offering a higher level of assurance before allowing accounts to be created or maintained.
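One way to picture this proportionality principle is as a mapping from a service’s risk tier to a minimum acceptable assurance strength. The tiers, the ordering of methods and the mapping in the sketch below are illustrative assumptions, not a taxonomy drawn from the codes.

```python
from enum import IntEnum

class AssuranceLevel(IntEnum):
    # Ordered by assumed strength; the ranking is illustrative, not official.
    SELF_DECLARATION = 0   # explicitly insufficient on its own
    AGE_INFERENCE = 1      # e.g. signals from account history or behaviour
    AGE_ESTIMATION = 2     # e.g. facial age estimation
    AGE_VERIFICATION = 3   # e.g. document- or ID-based checks

# Hypothetical mapping from a service's risk tier to its minimum assurance level.
REQUIRED_LEVEL = {
    "lower_risk": AssuranceLevel.AGE_INFERENCE,
    "higher_risk": AssuranceLevel.AGE_ESTIMATION,
    "adult_only": AssuranceLevel.AGE_VERIFICATION,  # e.g. pornography services
}

def is_proportionate(risk_tier: str, method: AssuranceLevel) -> bool:
    """A method is acceptable if it meets or exceeds the tier's minimum."""
    return method >= REQUIRED_LEVEL[risk_tier]

# Under this mapping, an adult-only service cannot rely on estimation alone.
assert not is_proportionate("adult_only", AssuranceLevel.AGE_ESTIMATION)
```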
Across all Phase 2 codes, age assurance methods must be technically feasible, privacy-protective and consistent with Australian privacy law. Providers must be able to demonstrate that their approach meaningfully reduces the likelihood of children accessing age-restricted content. The full codes and supporting guidance are available on the eSafety website.