The Online Safety Act 2023 – An Authoritarian Creep by Statute

Copyright © British Democratic Alliance 2025. All rights reserved.

The Online Safety Act 2023 presents itself as a legislative effort to protect children and vulnerable users from harm online. In truth, it is a sophisticated attempt by the British State to centralise control over information, extend censorship powers through regulation, and create a chilling environment for dissent. Here, we examine the Act section by section, exposing its potential for abuse, its violations of fundamental rights, and its capacity to undermine political opposition under the pretence of safety and digital hygiene.

Where a clause is merely administrative or procedural, it is noted but not analysed. Where clauses contain vague, subjective or coercive language, or permit powers that violate privacy, freedom of expression, and legal due process, they are critically examined in full.

Sections 1 to 3: Meaning of Regulated Services

The Act defines “regulated services” as user-to-user services and search services. This includes websites, social media, discussion forums, messaging apps, video platforms, and even basic blogs that permit comments.

Observation:
This expansive definition is intentionally vague. It sweeps up a vast swathe of the internet under the OFCOM regime. Even a self-hosted WordPress site with a comment box may fall within scope unless it qualifies for one of the narrow “limited functionality” exemptions in Schedule 1. This creates a situation where the state can potentially demand access, reporting, and compliance from private citizens acting in good faith. This violates the principle of proportionality that underpins the Human Rights Act 1998.

Section 3(4): Secretary of State may amend definitions

Observation:
This clause gives the Secretary of State unilateral power to redefine what constitutes a regulated service through secondary legislation, without requiring a full parliamentary debate. This is deeply authoritarian. A future Government could quietly expand the scope to cover encrypted messaging, private cloud storage, or file transfer services.

Section 6: Territorial Scope

This section makes clear that any service with links to the United Kingdom is within scope, even if hosted or operated elsewhere; mere accessibility from the UK, combined with a material risk of harm to UK users, is enough.

Observation:
This extraterritorial reach is unworkable and dangerous. It threatens foreign platforms with UK regulatory enforcement and implicitly forces UK-compliant behaviour on the global web. More worryingly, it empowers the state to block or fine platforms for non-compliance, limiting access to information under a domestic legal framework with no international checks.

[Part 2: Duties of Service Providers]

Section 9: Duties to Protect Users from Illegal Content

This is the first major power. It imposes a duty on services to prevent users from encountering illegal content and to remove such content promptly upon becoming aware of it.

Observation:
The principle appears reasonable until one considers enforcement. Platforms will naturally err on the side of censorship, deleting anything potentially contentious to avoid state penalties. There is no requirement for intent, proportionality, or context. Satire, parody, journalism and political speech all risk suppression under this standard. The Act makes no allowance for extenuating context or artistic value.

Section 11: Children’s Online Safety Duties

Services must assess risk and take proportionate steps to prevent children from accessing harmful content, including legal but potentially distressing material.

Observation:
The phrase “legal but harmful to children” is a gateway to content policing based on moral judgement, not law. This introduces a subjective, state-defined threshold for acceptability. It also allows OFCOM to demand changes to algorithms, design, and moderation that affect all users, not just children, thereby compromising free expression for adults in the name of child protection. It is an open door to overreach by government agencies and the security services.

Section 12: Content that is harmful to children

This section does not define harm in concrete terms, instead referring to “material risk of significant harm”.

Observation:
This is classic legislative vagueness. It opens the door to policy-driven censorship, enforced by OFCOM codes with little democratic oversight. Content that challenges mainstream narratives on gender, politics, history, or religion could be deemed harmful simply for provoking discomfort.

Section 20: False Communications Offence

Criminalises knowingly sending false information with intent to cause harm.

Observation:
This risks criminalising misinformation or unpopular opinion. In a climate of politicised science and contested narratives, proving intent is nearly impossible. This will not be used against the BBC, party media arms, or government ministers. It will be used against lone individuals, bloggers, critics and whistleblowers. The Act does not require the State to prove that actual harm occurred; the subjective “intent” to cause it is sufficient. Scientific discussion will also fall under this umbrella. While those who knowingly and deliberately spread false information should be held to account, the offence becomes a very grey area when a poster’s education or scientific understanding is the real issue: criminalising people for a lack of education is abhorrent.

[Part 3: OFCOM’s Powers and Regulatory Control]

Section 36 to 43: Codes of Practice

OFCOM is empowered to issue binding codes of practice on risk assessments, content moderation, algorithms, and service design. While technically advisory, services must explain deviations from the code.

Observation:
This creates a de facto regulatory regime where OFCOM writes the rules and services comply under threat of reputational or financial damage. Beyond OFCOM’s own consultations, there is no meaningful democratic input on content moderation standards or on the definitions of harm, risk, or acceptability. This hands moral authority over national discourse to an unelected quango.

Section 91: Use of Technology Notices

OFCOM can require service providers to use accredited technology to identify terrorism content and CSAM (child sexual abuse material).

Observation:
While CSAM detection is legitimate, the Act allows this to apply to private messaging services. End-to-end encryption may be rendered useless through forced client-side scanning. OFCOM, a state actor, may require surveillance at scale of every personal message. This breaches Article 8 of the European Convention on Human Rights (the right to respect for private life), given effect in domestic law by the Human Rights Act 1998. There is a fine line between security and freedom – this crosses that line.
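To illustrate the mechanism at issue, here is a minimal, hypothetical sketch of how client-side scanning sits in front of end-to-end encryption: content is matched against a watch-list on the sender’s own device before encryption is ever applied. The hash list, function names and reporting hook below are invented for illustration only, and real accredited systems would use perceptual hashing rather than exact SHA-256 matching.

```python
# Hypothetical sketch of client-side scanning (not any real accredited system).
# The point: the plaintext is inspected on the sender's device *before*
# end-to-end encryption is applied, so encryption no longer guarantees that
# only the sender and recipient can see the content.
import hashlib

# Hypothetical watch-list of banned content hashes pushed to every device.
# (This entry is simply the SHA-256 of an empty byte string, so the demo
# below triggers a match.)
KNOWN_BAD_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def report_match(digest: str) -> None:
    """Stand-in for the reporting channel back to the provider or authorities."""
    print(f"Match reported before encryption: {digest}")

def scan_then_encrypt(message: bytes, encrypt):
    """Scan the plaintext on the device, and only encrypt it if it passes."""
    digest = hashlib.sha256(message).hexdigest()
    if digest in KNOWN_BAD_HASHES:
        report_match(digest)   # the fact of the match leaves the device
        return None            # the message is blocked and never sent
    return encrypt(message)    # encryption only applies after the scan

# Demo: the empty message matches the hypothetical watch-list and is blocked.
print(scan_then_encrypt(b"", lambda m: m[::-1]))
```

Whatever the matching technology, the structural point is the same: the scan happens on the plaintext, before the encryption that users believe protects them.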

Section 98: Powers of Entry and Inspection

OFCOM officers can enter premises, demand access to documents, and inspect systems.

Observation:
The potential for abuse becomes even more concerning when these powers intersect with those granted under the Terrorism Act 2000. Under TACT, police officers may stop and search individuals or vehicles if they reasonably suspect involvement in terrorism. This includes authority to seize and retain mobile phones, laptops, and cameras, often without charge or subsequent prosecution.

When combined with Section 98 of the Online Safety Act, which permits OFCOM to enter premises and inspect private systems, the result is a murky legal grey zone where individuals may be subjected to regulatory scrutiny and criminal suspicion simultaneously, without judicial oversight or effective recourse. Even where devices are eventually returned and no offence is found, the intrusion and reputational damage are irreversible. This creates a chilling effect on investigative journalism, whistleblowing, political activism, and dissenting opinion, all of which are essential pillars of democratic society.

Section 123: Financial Penalties

Failure to comply with duties or codes can result in fines of up to £18 million or 10% of qualifying worldwide revenue, whichever is greater.

Observation:
These extreme penalties will deter small platforms, academic services, independent hosting providers, and startups. Compliance is too burdensome, and the potential liability is too high. As a result, public discourse will consolidate around compliant giants who work hand in glove with the State. The likes of Meta, Google and other global platform providers have the financial resources for compliance; start-ups may not, and they will likely be cut off at the knees before they ever get off the ground.
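To put the asymmetry in concrete terms, here is a rough arithmetic sketch of the cap described above, the greater of £18 million or 10% of qualifying worldwide revenue. The company labels and revenue figures are hypothetical.

```python
# Rough arithmetic sketch of the maximum fine: the greater of GBP 18 million
# or 10% of qualifying worldwide revenue. Revenue figures are hypothetical.
def max_penalty(worldwide_revenue_gbp: float) -> float:
    """Return the statutory cap on the fine for a given annual revenue."""
    return max(18_000_000, 0.10 * worldwide_revenue_gbp)

for name, revenue in [("small UK start-up", 2_000_000),
                      ("global platform", 100_000_000_000)]:
    cap = max_penalty(revenue)
    print(f"{name}: revenue £{revenue:,.0f} -> maximum fine £{cap:,.0f} "
          f"({cap / revenue:.0%} of revenue)")
```

On these figures the fixed £18 million floor is nine times the small firm’s entire annual revenue, while for the giant the cap never exceeds 10% of turnover: the same statutory maximum that a global platform can absorb is existential for a start-up.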

[Part 4: Exemptions and Disingenuous Protections]

Section 14: Duties to Protect Freedom of Expression

Platforms must consider the importance of freedom of expression when deciding how to meet duties under the Act.

Observation:
This clause is meaningless. It does not override any other duty, nor does it offer statutory protection for speech. It merely instructs providers to “consider” the importance of freedom of expression, without requiring them to uphold it. It offers no defence against over-censorship and cannot be relied upon to preserve open discussion.

In practice, any content a platform deems “questionable” will likely be removed to avoid regulatory penalties, regardless of its intent, accuracy, or merit. This hands the State indirect control over discourse by incentivising platforms to act pre-emptively.

Crucially, this stifles not only dissent but also public education. Misguided or controversial views that might otherwise be challenged and corrected through open dialogue are instead silenced, depriving society of the opportunity to educate, engage, and evolve.

Section 15: Duties about Journalistic Content

Some content “of journalistic importance” is protected from removal.

Observation:
The definition of journalism under this Act is narrow, vague, and establishment-friendly. The legislation neither clearly defines who qualifies as a journalist nor recognises modern, decentralised forms of journalism. As a result, bloggers, whistleblowers, and citizen reporters receive no protection.

Self-styled citizen journalists, often those who report on local corruption, political abuses, or social failings, are likely to find their content disproportionately moderated or removed, in what constitutes a clear breach of Article 10 of the European Convention on Human Rights (freedom of expression and information), as given effect by the Human Rights Act 1998.

Meanwhile, legacy media organisations are explicitly safeguarded, even when partisan or demonstrably inaccurate. This creates a two-tier system of information rights. Independent voices are exposed, while state-aligned or establishment narratives enjoy privileged protection. The result is a regulatory structure that facilitates state influence over public discourse, insulating official narratives from meaningful criticism.

Section 17: Recognised News Publishers

The Act exempts content from certain pre-approved news organisations.

Observation:
As observed with Section 15, this creates a two-tier system. The mainstream press, even when partisan or inaccurate, enjoys protection from takedowns. Alternative media, opposition voices, and non-accredited publishers do not. This enshrines elite media supremacy in law.

Effectively, this gives the state the powers required to moderate any non-approved publisher whose content it objects to; this risks being used to stifle press freedom, evade oversight and conceal wrongdoing. The approved media will comply with OFCOM rules and, as such, publish only “official” narratives to avoid being penalised for voicing a dissenting opinion.

What does this say about freedom in the UK? What does it say about us?

“A free press can, of course, be good or bad, but, most certainly without freedom, the press will never be anything but bad.” – Albert Camus

“The ability of the press to print their stories without the government trying to get them to betray their sources is as essential to a free press as the ink it is printed with. Otherwise, who will hold accountable those who hold power over us?” – Rod Lurie

Final Assessment

The Online Safety Act 2023 is not merely flawed. It is a legislative framework for authoritarian control, disguised as protection. Its clauses are so broadly defined and coercively enforced that every citizen who engages online becomes a potential target. Its mechanisms create a censorship culture. Its exemptions protect only the powerful. And its tools, once operational, can be turned against any political opponent, investigative journalist, or activist.

In a healthy democracy, the answer to harmful or false information is more speech, better information, and civic education. Not digital suppression.

This Act must be challenged, reformed, and ultimately replaced with a rights-based, constitutional approach to online safety, one that protects children without sacrificing liberty.
