The UK Online Safety Act: Protecting Kids Online or Overreaching?

Carolanne Bamford-Beattie

Can governments really legislate their way to a safer internet?

In late 2023, the UK government passed one of the most ambitious pieces of digital legislation in recent history: the Online Safety Act. Touted as a landmark in protecting children from harmful online content, the law has been years in the making and has sparked heated debate both in Parliament and across social media.

For parents, the promise of stronger kids' online safety rules feels reassuring. After all, the digital world is where our children learn, play, and socialize. But critics argue that the Online Safety Act is a double-edged sword. Some believe it doesn’t go far enough to address the most dangerous corners of the internet, while others worry it goes too far in restricting free expression and privacy.

So, what does the law actually mean for families, and why are some people calling for the government to repeal the Online Safety Act before it has even taken full effect?

What Is the Online Safety Act?

The Online Safety Bill, now law as the Online Safety Act 2023, introduces new UK internet rules designed to regulate social media companies, search engines, and messaging apps. Platforms like TikTok, Instagram, and YouTube now have legal duties to:

  • Remove illegal content quickly, such as terrorism or child sexual abuse material.

  • Protect children from harmful but not illegal material, like cyberbullying, self-harm content, or extreme violence.

  • Introduce stricter age verification to prevent under-13s from joining platforms designed for older users.

  • Provide reporting systems so harmful content can be flagged more easily.

  • Comply with enforcement by Ofcom, which has the power to fine companies that fall short.

In theory, this should create a safer online environment for children. In practice, the law raises big questions about how far governments should go in controlling what we see and do online.

How the Online Safety Act Is Being Implemented

The Online Safety Act is being rolled out in phases.

New criminal offences like cyberflashing, intimate image abuse, and encouraging self-harm came into force in January 2024. In early 2025, duties around illegal content and pornography were introduced, alongside requirements for services to carry out risk assessments. By July 2025, platforms likely to be accessed by children must complete their children’s risk assessments, with Ofcom overseeing compliance.

Later in 2025, Ofcom will publish its register of categorised services, setting out which platforms face stricter accountability and transparency rules. Further phases are planned into 2026, making the Act a long-term shift rather than a quick fix.

Why Some Say the UK’s Online Safety Act Doesn’t Go Far Enough

Supporters of tougher UK social media laws argue the Act is still too weak in several areas:

  • Children remain exposed to harmful content.
    Even with new duties, platforms are still largely in control of how they moderate harmful material. Critics say the law doesn’t do enough to force companies to redesign algorithms that push dangerous or addictive content.
  • Loopholes for smaller platforms.
    While big tech companies face heavy scrutiny, smaller sites and private messaging apps may slip through the cracks. Parents know kids often find their way to less mainstream platforms, where harmful content is even less regulated.
  • Slow enforcement.
    The law gives Ofcom new powers, but parents worry action will be too slow. Harmful content spreads in seconds; enforcement that takes weeks or months won’t feel like meaningful protection.

VPNs and the Limits of Age Checks

While the UK Online Safety Act has introduced age verification for pornography and adult content, these systems are far from watertight. Many young people already know that VPNs can disguise their location or identity online. That means age checks, as they currently stand, can often be bypassed with ease.

This is where critics argue the Act doesn’t go far enough. By focusing on technical barriers without addressing how easily they can be undermined, the law risks giving parents a false sense of security. Unless enforcement and technology evolve to keep pace with how kids actually use the internet, the protections promised by the Act will remain patchy at best.

Why Others Say It Goes Too Far

On the other side of the debate, digital rights groups, academics, and even some parents fear overreach. Their main arguments include:

  • Privacy concerns.
    The Act could push companies to scan private messages to detect harmful content, raising fears of mass surveillance. Campaigners argue this undermines the right to private communication.
  • Free speech worries.
    What counts as “harmful but legal” is subjective. Critics fear platforms may take a “better safe than sorry” approach and over-remove content, stifling debate. Discussions around mental health or politics could be unfairly flagged.

  • Technical challenges.
    The Act’s age verification rules sound simple on paper, but in practice, proving your age online often means handing over sensitive ID or biometric data. Parents worry about the risks if this information is hacked or misused.

Some groups have even launched petitions pushing for changes or an outright repeal of the Online Safety Act. At the time of writing, petitions calling on the government to repeal parts of the Act have attracted hundreds of thousands of signatures.

Age Verification: A Sticking Point

One of the most controversial aspects of the law is how its age verification requirements will be applied. From 2025, social media companies must take “proportionate measures” to stop underage users signing up.

That might mean stricter ID checks, credit card verification, or even biometric scans. For parents, stopping young kids from lying about their age online sounds positive. But campaigners warn it could:

  • Exclude children without access to formal ID.

  • Create privacy risks if sensitive data is hacked.

  • Place a heavy burden on families to navigate complex verification systems.

How This Is Already Impacting Online Behaviour

One of the clearest early impacts of the Online Safety Act has been on how people access pornography. From January 2025, services that publish adult content are required to introduce strict age verification and to carry out “children’s access assessments.”

This has already changed behaviour: many adult sites now ask for ID checks or third-party verification before allowing access. For some, this is a long-overdue step in protecting children from harmful material. For others, it raises serious privacy worries about how sensitive data is stored and whether users will be tracked.

Debates around the Act have spilled into mainstream forums, with Reddit threads and even the Act’s Wikipedia entry reflecting frustration, confusion, and in some cases, people actively looking for ways around the new barriers.

A Work in Progress

The UK Online Safety Act is one of the most far-reaching attempts yet to regulate the digital world. It reflects real concerns about the dangers children face online, from harmful content to addictive algorithms. But like many new internet laws, it risks unintended consequences: censorship, privacy issues, and surveillance.

As implementation continues through 2025 and beyond, arguments for reform, and calls to repeal the Online Safety Act altogether, will persist. For parents, the key lesson is that no piece of legislation can do the job on its own: staying engaged with what your children see and do online remains the most reliable protection.