A recent survey has cast significant doubt on the effectiveness of Australia’s landmark decision to ban social media for children under 16. Despite the legal restrictions implemented last December, data suggests that a vast majority of teenagers are finding ways to bypass the rules, leaving the core issue of online safety largely unaddressed.
The Reality of the “Digital Gap”
According to new findings from the Molly Rose Foundation, the ban has not acted as the decisive barrier many hoped for. The survey, which polled 1,050 children, revealed several troubling trends:
- Widespread Access: Approximately two-thirds of 12-to-15-year-olds who were using social media before the ban still maintain active accounts.
- Platform Penetration: Roughly half of those surveyed still access TikTok, YouTube, or Instagram; counting Facebook and Snapchat as well, the share of active users rises to nearly two-thirds.
- Ease of Circumvention: Perhaps most significantly, 70% of children reported that it is “easy” to bypass the age restrictions.
- Perceived Safety: Over half of the participants stated that the ban has made no measurable difference to their actual safety online.
Why the Ban Is Struggling
The failure of the ban to curb usage points toward a systemic issue in how social media companies manage age verification. Rather than preventing underage access, the current systems appear to be easily manipulated.
Australia’s eSafety Commissioner has already identified “major gaps” in how platforms like Meta, YouTube, and TikTok are implementing these rules. The Commissioner noted that children are often able to repeatedly attempt age verification processes until they successfully trigger a “16+” result, effectively tricking the system.
This suggests that the burden of enforcement has shifted to the platforms, but without robust technological safeguards, the ban acts more as a “paper tiger” than a functional shield.
A Warning for the United Kingdom
The findings have immediate implications for the United Kingdom, where the government is currently consulting on similar safety measures. The Molly Rose Foundation has warned that the UK should avoid rushing into an “Australia-style” ban, calling it a “high-stakes gamble.”
Andy Burrows, head of the foundation, argues that while proponents see bans as a “firebreak” to stop harm, the early evidence suggests they may actually let tech companies “off the hook.” By focusing on age limits rather than platform design, regulators may be ignoring the root cause of the problem.
The Shift Toward Design Regulation
The consensus among safety advocates is shifting from who uses the apps to how the apps function. The Molly Rose Foundation suggests that true safety requires:
- Regulating Business Models: Moving away from models that prioritize engagement and profit over user wellbeing.
- Addressing Addictive Design: Tackling the specific features—such as infinite scrolls and algorithmic loops—that make platforms difficult for minors to use responsibly.
- Strengthening Oversight: Ensuring platforms are held accountable for the actual safety outcomes, not just the presence of age-gate checkboxes.
“The cost is too high to get this wrong by rushing into an Australia-style ban that offers the perception of security but is letting children down in practice.” — Ian Russell, Chair of Molly Rose Foundation
Conclusion
The Australian experience suggests that simply banning under-16s from social media is insufficient when platforms lack rigorous age verification. For regulators in the UK and elsewhere, the lesson is clear: meaningful safety requires tackling the addictive design of the platforms themselves rather than merely policing the age of their users.