Tom Slater

Ofcom can’t be trusted to censor social media

It’s boom time at Ofcom. In the past few years, what was until recently the government-backed regulator for broadcasting, telecoms and postal industries (already an absurdly broad range of responsibilities) has seen its remit expanded beyond all recognition. Following the passage of the Online Safety Act 2023, Ofcom has been handed the famously straightforward task of regulating social-media companies – compelling them to clamp down on illegal speech and activity on their platforms. The Media Act 2024, which gained royal assent in May, has extended its reach to streaming services, too. Now, a think-tank has essentially suggested we should cut out the middleman and turn the Office of Communications into a full-blown Ministry of Truth.

This is the call from the Centre for Countering Digital Hate (CCDH) for Ofcom to be better equipped to ‘fight misinformation and deamplify harmful posts to prevent public disorder’. Among other proposals, apparently aimed at preventing a repeat of the recent race riots, the CCDH thinks that Ofcom, in times of crisis, should be able to apply to a judge for ‘emergency powers’, allowing it to demand immediate action on ‘harmful’ content and ‘misinformation’ posted on social-media platforms. Reportedly, this could be achieved by tweaking the ‘special circumstances’ directive in the Online Safety Act, which enables the science, innovation and technology minister to issue a direction to Ofcom during a crisis of national security or public health.

What could possibly go wrong? Quite a lot, obviously. We’ve seen from Ofcom’s never-ending crusade against GB News that it struggles to be impartial even within its existing areas of responsibility. And yet now it is being asked to enforce the rules around what ordinary people can and can’t say on social media. Last year, it had to suspend its new ‘online safety supervision director’ – charged with ensuring social-media firms obey the new censorship regime – after she allegedly liked an Instagram post accusing Israel of ‘ethnic cleansing and genocide of Palestinians’. Which hardly inspired confidence, given that that Israelophobic take is itself a bit of misinformation.

Ofcom’s new responsibilities under the Online Safety Act are already a major threat to free speech online. The Act’s insistence that platforms proactively seek out and remove illegal content, or else face Ofcom’s fines and sanctions, is an incentive to censor first and ask questions later. Quite aside from whether or not we should have so many legal restrictions on speech as it is, this law will mean algorithms and overworked moderators removing all manner of perfectly legal speech, for fear of falling foul of their platform’s new duties. Ofcom’s final codes of practice for social-media firms haven’t even been published yet. That there are already calls for it to have greater powers suggests this is more about a rush to censorship than it is a sober reflection on how best to combat fake news and hate.

Because when you think about it for longer than five seconds, empowering a state-backed regulator to demand that Big Tech immediately clamp down on certain forms of speech is a recipe for authoritarianism. It certainly panned out that way during the pandemic, when social-media firms worked hand-in-glove with governments and public-health bodies to suppress certain statements and opinions about Covid. Time and again, people were censored not just for spewing obvious falsehoods, which would be bad enough, but also for opposing particular policies, or for saying something that didn’t quite jibe with the current public-health ‘consensus’. Censorship was, in effect, outsourced to the private sector, and totally legitimate, legal statements were silenced as a consequence. Indeed, in the space of a year or two, the lab-leak theory – positing that Covid originated in a Chinese laboratory – went from being a racist conspiracy theory, wilfully clamped down upon by Facebook et al, to a plausible explanation endorsed by various US government agencies.

If we are truly concerned about tackling misinformation, or indeed hate, the last thing we should do is empower a state-backed body to define and censor it. Call me an old cynic, but when it comes to misleading the public, governments (and their ‘independent’ regulators) are repeat – and far more consequential – offenders. 