“Some of the biggest risks to children arise on social media platforms, video and music streaming services, and video gaming platforms. In these sectors, children’s personal data is used and shared to shape content, recommendations, and features. This can expose them to inappropriate advertising, unsolicited contact, and design choices that encourage excessive use. These practices can create physical, emotional, psychological, and financial harms.”
Stephen Bonner, Executive Director, Regulatory Futures and Innovation
Information Commissioner’s Office
Why social media platforms are high risk
Social media platforms combine large-scale user-generated content, recommendation algorithms, messaging, advertising, and monetisation of attention. The ICO has explicitly identified social media as high-risk for children, particularly where platforms rely on profiling, engagement optimisation, or social interaction.
Unless a platform can demonstrate that all content and functionality are suitable for users of any age, operators should assume that children are present and design accordingly. In practice, this requires age assurance.
UK legal framework
Online Safety Act 2023
The Online Safety Act 2023 is now the primary regulatory framework governing social media platforms in the UK.
It applies to user-to-user services, defined broadly as services that allow users to encounter content from other users. This includes all mainstream social media platforms and many community, messaging, and content-sharing services.
Where a service is likely to be accessed by children, platforms must:
• Assess the risk of harm to under-18s
• Prevent children from encountering content that is harmful to them
• Apply age assurance where necessary to apply protections effectively
• Ensure that adult-only content is not accessible to children
Age assurance is required wherever it is necessary to distinguish adults from children in order to apply safeguards.
The regulator, Ofcom, can impose enforcement notices, access or service restriction orders, and fines of up to £18 million or 10% of qualifying worldwide revenue, whichever is higher.
Duty to apply and enforce minimum age terms
Most social media platforms set a minimum age for users in their own terms and conditions, commonly 13 or 16 depending on jurisdiction and service design. Under UK law, it is not sufficient to state a minimum age. Platforms are expected to take reasonable and proportionate steps to apply and enforce their own age rules.
Where a platform’s terms prohibit children below a stated age from holding an account, allowing underage users to register and remain active undermines compliance with multiple legal regimes, including:
• UK GDPR, where consent may be invalid if the user is below the applicable digital age of consent
• The Age Appropriate Design Code, which requires services to identify whether users are children in order to apply appropriate safeguards
• The Online Safety Act 2023, which requires platforms to assess and mitigate risks to children based on how the service actually operates, not how it is described in terms
In practice, if a platform does not know whether a user meets its own minimum age requirement, it cannot credibly rely on its terms as a compliance mechanism. Regulators increasingly look at actual user outcomes, not formal policy statements.
This means that, for many platforms, age assurance at account creation or before access to core features is necessary in order to:
• Determine whether a user is permitted to hold an account at all
• Apply child-specific protections where children are allowed
• Demonstrate that the platform is enforcing its own rules
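To illustrate the account-creation check described above, the sketch below gates registration on the outcome of an age assurance step and records which protections apply. It is a minimal illustration under assumed thresholds; the type, constant, and function names (AgeAssuranceResult, MIN_ACCOUNT_AGE, decideSignup) are hypothetical and not drawn from any particular platform or regulatory text.

```typescript
// Illustrative sketch only. Names and thresholds are assumptions for the example,
// not taken from any specific platform, vendor API, or regulatory guidance.

type AgeAssuranceResult = {
  method: "id_document" | "facial_estimation" | "parental_confirmation";
  estimatedAge: number;   // age established by the chosen assurance method
  confidence: number;     // 0..1, how reliable the method's output is
};

type SignupDecision =
  | { outcome: "rejected"; reason: string }          // below the platform's own minimum age
  | { outcome: "child_account"; protections: true }  // allowed, but with child safeguards
  | { outcome: "adult_account"; protections: false };

const MIN_ACCOUNT_AGE = 13;   // the platform's own minimum age from its terms (assumed)
const ADULT_AGE = 18;         // threshold for adult-only content and features

// Decide at registration whether an account may be created at all and, if so,
// whether child-specific protections must be applied.
function decideSignup(assurance: AgeAssuranceResult): SignupDecision {
  if (assurance.estimatedAge < MIN_ACCOUNT_AGE) {
    return { outcome: "rejected", reason: "below platform minimum age" };
  }
  if (assurance.estimatedAge < ADULT_AGE) {
    return { outcome: "child_account", protections: true };
  }
  return { outcome: "adult_account", protections: false };
}

// Example: a 15-year-old verified by facial age estimation is allowed an account,
// but with child protections switched on by default.
console.log(decideSignup({ method: "facial_estimation", estimatedAge: 15, confidence: 0.9 }));
```

Keeping the decision together with the assurance evidence that produced it is one way a platform could later demonstrate that it enforces its own minimum age terms, rather than merely stating them.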
Failure to enforce stated minimum age terms may be treated by regulators as evidence of ineffective risk management rather than as a neutral omission.
UK GDPR and children’s data
Under UK GDPR, children’s personal data is afforded enhanced protection. Personal data includes IP addresses, device identifiers, inferred interests, and behavioural signals.
If you rely on consent as a lawful basis for processing personal data, Article 8 UK GDPR requires the user to be old enough to give valid consent to an online service. In the UK, the digital age of consent is 13; below that age, consent must be given or authorised by a person with parental responsibility.
Where platforms do not know a user’s age or location, they cannot know whether consent is valid. As a result, age assurance is often necessary at account creation to establish whether consent can be relied upon at all.
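A minimal sketch of that point, assuming the UK digital age of consent of 13: where no verified age is available, consent is simply not treated as a usable lawful basis. The function and constant names below are illustrative only.

```typescript
// Illustrative sketch only: UK_DIGITAL_AGE_OF_CONSENT and canRelyOnConsent are
// hypothetical names. The point is that without a known age, consent cannot be
// assumed to be valid.

const UK_DIGITAL_AGE_OF_CONSENT = 13;

// verifiedAge is undefined where no age assurance has been performed.
function canRelyOnConsent(verifiedAge: number | undefined): boolean {
  if (verifiedAge === undefined) {
    // Age unknown: the platform cannot establish that consent is valid,
    // so it should not rely on consent for profiling or personalised ads.
    return false;
  }
  return verifiedAge >= UK_DIGITAL_AGE_OF_CONSENT;
}

console.log(canRelyOnConsent(undefined)); // false - age never established
console.log(canRelyOnConsent(12));        // false - below the digital age of consent
console.log(canRelyOnConsent(16));        // true
```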
Regulators are particularly concerned about:
• Profiling and recommendation systems applied to children
• Targeted or personalised advertising involving children
• Design patterns that nudge children to remain online
Enforcement and penalties
The ICO may issue assessment notices, enforcement notices, and administrative fines of up to £17.5 million or 4% of global annual turnover, whichever is higher.
Age Appropriate Design Code
The Age Appropriate Design Code, also known as the Children’s Code, applies to any online service likely to be accessed by children under 18 that processes personal data.
For social media platforms, this includes obligations to:
• Assess how algorithms, feeds, and interaction features affect children
• Minimise data use, profiling, and nudging where children are present
• Apply age-appropriate defaults and safeguards
The Code is fully in force and actively enforced.
What this means in practice
Social media platforms serving the UK market should assume that children are present unless they have strong evidence to the contrary.
To comply with UK law, platforms may need to:
• Determine whether a user is a child or an adult
• Apply different content rules and defaults based on age
• Restrict adult or harmful content to adults only
• Limit or disable certain social features for children
The appropriate level of assurance depends on the nature of the content, functionality, and risks involved. However, given regulatory expectations and reputational risk, we generally recommend at least a standard level of age assurance for social media platforms.
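One way to picture these obligations is as an age-tiered set of defaults applied once a user's age band has been established. The tiers, feature names, and default values in the sketch below are assumptions made for illustration, not settings prescribed by the ICO or Ofcom; the safeguards a given platform needs depend on its own risk assessment.

```typescript
// Illustrative sketch of age-tiered defaults. Tier boundaries and settings are
// assumptions for the example, not prescribed regulatory values.

type AgeBand = "under13" | "13to15" | "16to17" | "adult";

interface AccountDefaults {
  profileVisibility: "private" | "public";
  directMessagesFrom: "nobody" | "contacts" | "anyone";
  personalisedAds: boolean;
  adultContent: boolean;
  engagementNudges: boolean; // e.g. streaks, autoplay, "keep scrolling" prompts
}

// Stricter defaults apply to younger bands; adult content is gated to adults only.
const DEFAULTS: Record<AgeBand, AccountDefaults> = {
  under13:  { profileVisibility: "private", directMessagesFrom: "nobody",   personalisedAds: false, adultContent: false, engagementNudges: false },
  "13to15": { profileVisibility: "private", directMessagesFrom: "contacts", personalisedAds: false, adultContent: false, engagementNudges: false },
  "16to17": { profileVisibility: "private", directMessagesFrom: "contacts", personalisedAds: false, adultContent: false, engagementNudges: true },
  adult:    { profileVisibility: "public",  directMessagesFrom: "anyone",   personalisedAds: true,  adultContent: true,  engagementNudges: true },
};

function bandFor(age: number): AgeBand {
  if (age < 13) return "under13";
  if (age < 16) return "13to15";
  if (age < 18) return "16to17";
  return "adult";
}

// Example: a verified 14-year-old gets private-by-default settings with
// personalised advertising and adult content switched off.
console.log(DEFAULTS[bandFor(14)]);
```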
Australia
Australia has introduced a statutory minimum age of 16 for social media use under the Online Safety Amendment (Social Media Minimum Age) Act 2024. Social media platforms are legally required to take reasonable steps to prevent under-16s from holding accounts, not merely to state age limits in their terms and conditions.
The regime is enforced by the eSafety Commissioner, which has powers to issue compliance notices, impose civil penalties, and require platforms to demonstrate how they are enforcing age restrictions in practice.
Key implications for platforms include:
• Platforms must take reasonable and effective steps to identify users under 16
• Reliance on self-declaration alone is not sufficient, nor can a platform create an account without an age check and leave it to operate until enough behavioural data accumulates to infer the user’s age
• Platforms must be able to show that their age assurance approach is effective at scale
Australia’s approach is explicitly outcome-focused. Regulators assess whether under-16s are actually prevented from accessing social media services, rather than whether age limits exist on paper.
For global platforms, Australia now represents a hard regulatory floor. Any platform claiming to enforce a minimum age but lacking effective age assurance should expect regulatory scrutiny.
PLEASE NOTE
This website does not constitute legal advice. You should always seek independent legal advice on compliance matters.