TikTok, YouTube, Instagram, Snapchat, Facebook and other platforms became legally obliged this week to deny children under 16 in Australia access to their services, or face fines of up to A$49.5m (£24.65m). The ban aims to protect under-16s from mental ill-health, distorted body image, misinformation and the myriad other harms linked to spending too much time on their phones.
Tech companies oppose the ban, and the practicalities of implementing and enforcing it remain uncertain, but governments around the world will be watching closely to see how the experiment pans out. In the UK, under-16s are protected online mainly by the Online Safety Act 2023, enforced by Ofcom, which requires tech companies to shield children and teenagers from pornography and other harmful content and to take down illegal content proactively. But could the UK government follow Australia’s example?
Mark Jones, partner at Payne Hicks Beach, who specialises in online safety law, described Australia’s move as ‘a bold swing at a complex problem’ but warned ‘it risks becoming the digital equivalent of locking the front door while leaving every window wide open.
‘The whole scheme hinges on age verification systems that are notoriously unreliable: able to read the same teenager as 14 or 43 depending on the angle, and apparently no match for a Beyoncé filter. Once you ban something, you invite workarounds: VPNs, alternate accounts, and whatever creative loopholes young people invent next.
‘More importantly, a ban sidesteps the deeper issue of dangerous content and lax platform accountability. If we simply exile under-16s from mainstream platforms without fixing the ecosystem, we’re not creating safety; we’re merely delaying exposure until their 16th birthday.
‘In a world where kids learn, socialise, and play online, this blunt tool may look decisive, but it’s unlikely to deliver the safer internet we all actually want.’