One of Rishi Sunak’s pledges was to remove the ‘legal but harmful’ censorship clause that Boris Johnson was poised to bring in via the Online Safety Bill. A few weeks ago it was said that he had done so and I wrote a piece congratulating him. I may have spoken too soon. The Bill as published would actually introduce (rather than abolish) censorship of the written word – ending a centuries-old British tradition of liberty. The censorship mechanism is intended for under-18s – an improvement on the original, draconian plan. But it still raises problems that I doubt have been properly discussed in Whitehall given the bias amongst officials desperate to get this Bill through.
Films and magazines have long been subject to age censorship. The intentions of this Bill are fine: to protect children from indefensible content. But the greatest mistake in politics is to judge a scheme by its intentions, rather than its effects. The moment that a censorship mechanism is introduced for the written word, you risk opening a Pandora’s box of unintended consequences. If the government is pretending to have abolished legal-but-harmful, it might not really think about or scrutinise the potential effects of a censorship-by-proxy regime whose very existence it is reluctant to admit. Worse, it’s pretending reliable tech exists to censor content for kids only. It doesn’t. So as things stand, Sunak’s Bill is based on a false premise.
As the world’s oldest weekly, The Spectator has campaigned for free speech several times over its lifetime. Each time our premise has been simple: speech should be legal or illegal but there should be nothing in between. The moment you create a third category – of censorable speech – then you create a tool whose consequences are hard to envisage or control. In the digital world, where censorship bots and algorithms are making tens of thousands of decisions a day, the potential for unintended consequences rises exponentially, far beyond the understanding of Whitehall officials, especially if the political pressure they face is pro-censorship and anti-Big Tech.
In the case of the revised Online Safety Bill, we need to ask:
- How are tech companies supposed to distinguish between adults and kids?
- If they are expected to verify, does this mean the end of anonymous accounts?
- If they are expected to guess, how likely are they to guess right? Can they distinguish between a 17-year-old and an 18-year-old?
- What risk is there of adults being subjected to a censorship regime intended for kids? If the government’s case rests on the power of age-guessing algorithms, where is the evidence of their accuracy?
- Will the state turn a blind eye to the resulting censorship? If online firms had to list all the content they target, we could all see whether it’s being done right. But if there is no such obligation, we will never know what censorship regime is in operation.
- What speech is being censored? The government promises a list, but has not published one. Will MPs be asked to vote for censorship, but only told later what’s being censored?
- Michelle Donelan, the Culture Secretary, has just been on the radio saying fines will be crushing (‘we’re talking tens of billions of pounds’) for Big Tech firms who violate the censorship clause. Given that such firms make virtually no money from current affairs and political discussion (they’d drop it all if they could as it represents a massive regulatory risk and negligible cash), won’t they just let the censorship bots rip on all content – rather than risk ‘billions’ on the accuracy of age-targeting algorithms?
- If online firms can’t know user ages for sure, and are on the hook for billions if they mess up, surely they will err on the side of content removal?
It’s true that some of the worst parts of the original Bill have gone. In the old Bill, the regulator or Secretary of State could change what was being censored. As things stand, there will be a list that only parliament can add to – which will make it harder to put Jimmy Carr jokes on the list of verboten content (as Nadine Dorries had suggested). The odds are – although we don’t know – that the banned list will include things like suicide encouragement (which is already illegal), so there is less chance of collateral damage. But there is still a substantial risk of this. As an editor, I see this every day: the way online firms apply their own invisible and new forms of censorship, usually targeting minority ideas which bots identify as suspect.
The problem with censorship is always – always – the unintended and unimagined consequences. Blair introduced hate-speech laws to outlaw racism, but ended up being himself investigated for allegedly being rude about the Welsh. As he was being interviewed by North Wales Police, he perhaps thought ‘hmm: this is an unforeseen, unintended consequence of my hate-speech bill’. This encapsulates the problem with censorship: things get out of control very quickly. Generations of politicians have concluded that it is, in that case, best not to censor the written word at all: not for kids, not for adults, not for anyone.
If Rishi Sunak now changes this and introduces censorship, he will take us into dangerous waters. And this Bill contains nothing, absolutely nothing, to monitor the censorship that it would introduce. The Bill rightly outlaws more of the material involved in the tragic death of Molly Russell: that’s how to do it. Make speech legal, or illegal. But this country’s tradition of no censorship of the written word can, even now, be defended by removing ‘legal but harmful’ in its entirety.