Matthew Lesh

The Online Safety Bill still threatens free speech and privacy

The Online Safety Bill became a lightning rod for criticism during the Conservative party leadership contest over the summer. A wide array of candidates, from Kemi Badenoch to Liz Truss and Rishi Sunak, promised to take another look at how the legislation, and its attempt to crack down on online harms, could interfere with free speech. 

The ‘legal but harmful’ duties, now being removed, required the largest digital companies to address state-determined categories of legal speech – like ‘disinformation’ or ‘hate speech’ – in their terms and conditions. But in practice ‘legal but harmful’ was never the most problematic part of the proposed legislation. While the impetus was for more user content to be removed, technically the platforms could have opted to do nothing. And regardless, the provision only applied to the largest companies. 

The most concerning part of the Bill, perhaps counterintuitively, relates to illegal content. The Bill requires all digital platforms, from Twitter to Mumsnet, to proactively scan user speech for a wide array of ‘priority’ illegal content and censor it using automated systems. Shockingly, this duty even applies to encrypted private messaging services like WhatsApp and iMessage. In the name of avoiding large fines – up to 10 per cent of any given platform’s global revenue – this could turn into censorship on an industrial scale.
Even worse, the removal of content is required on the whimsical basis of anything that the platform has ‘reasonable grounds to infer’ could be illegal. This includes inferring whether the ‘mental elements’ of an offence (e.g. intent to cause harm) are present and whether there is a lack of defence. This raises obvious challenges: how is a digital platform meant to read a user’s mind? Why is the threshold below the usual legal standard (i.e. beyond reasonable doubt)? Does this bake in censorship without legal process and rule of law protections?

In other words, ‘legal but harmful’ may have gone but Facebook’s Nick Clegg will still be expected to read your private messages and decide whether you have been naughty. 

Just take the new harmful communications offence – the replacement for the problematic section 127 of the Communications Act. The offence outlaws sending a message that causes serious psychological harm to a likely audience without a reasonable excuse. This provision is built on a dangerous notion: that ideas which cause negative feelings should be censored. It puts the core premise of ‘cancel culture’ directly into law. 

At least, as a criminal offence, this provision leaves people’s basic legal rights intact. The real problem comes when you mix the new communications offence with the illegal content duties. The Online Safety Bill asks firms to remove anything they infer could cause serious psychological distress. This is a recipe for a heckler’s veto: encouraging the most easily offended to claim emotional harm and demand content removal. It enables trans activists to demand the removal of ‘TERF’ speech (and vice versa), while Islamists can push for takedowns of Charlie Hebdo magazine covers.

That’s not all. The Bill will also introduce ‘age assurance’ requirements to prevent children from accessing content only meant for adults. This will not simply be an annoyance, with constant requests for driving licences and the use of behavioural biometrics. It will also raise significant privacy issues.

The additional regulatory burdens in the Bill risk cementing the position of the largest tech companies while making the UK a less welcoming place for innovation. The Bill will require tens of thousands of companies to determine whether they are in scope and undertake extensive risk assessments. They will then have to comply with administrative obligations and implement systems to monitor their platforms. This will be particularly burdensome for smaller companies and start-ups. It could even lead overseas websites to block access for UK users rather than comply — as happened after the EU introduced GDPR. Taken together, it hardly looks like seizing the opportunities of Brexit to be more welcoming of enterprise.

The entire premise of the Online Safety Bill needs a serious rethink. The Bill presumes that the government can legislate for safety simply by handing off complex societal issues to Big Tech in the form of ‘duties’. In the process, it makes false promises to parents, whose kids will not be safer without better education and law enforcement. It also tries to solve almost every digital issue, and in doing so hands over extraordinary enforcement and discretionary powers to Ofcom. It will undoubtedly come back to bite Conservatives by empowering those who demand censorship. 

The Bill marks a movement from an era of a relatively free and open internet to one in which the grubby hands of state enforcement will, once again, try to dictate what we are allowed to see and believe. The government’s stated goal is to make the UK the safest place in the world to go online. But in the process of asking for some additional safety, we are sacrificing liberty. In the end, we risk getting neither.