Tory online safety plans risk same mistakes as SNP Hate Crime Bill

As the SNP’s Hate Crime Bill crashed through Holyrood last year, its loudest critics were the Scottish Tories. To their credit, Conservative MSPs were the only politicians to oppose censorious ‘stirring up hatred’ offences at every turn. They listened to the warnings of lawyers, academics, journalists, and campaigners about the potential these vague provisions had to undermine free expression. And they sought to amend the legislation to include firmer protections for free speech, including in the home.

Given this firm defence of free expression north of the border, you might expect the Conservative administration in London to take a similar, strong line against legislative curbs on speech. However, aspects of online safety proposals introduced by the UK Government look every bit as illiberal as the Scottish ‘stirring up hatred’ offences. Ironically, the UK Government has set itself on the same collision course with civil society as the Scottish Government did with its censorious hate crime law. Lawyers, journalists and campaigners are already lining up for a fight.

Of particular concern are plans to criminalise online content deemed “likely” to result in “psychological harm”. New offences, proposed by the Law Commission and accepted by UK Ministers this month, would focus on the “harmful” effect of statements made online, rather than on their being “indecent” or “grossly offensive”, as under current laws. The vague terminology is alarming, and eerily similar to the SNP’s hate crime offences as first proposed.

The Hate Crime Bill sought to criminalise speech and writing deemed “likely” to “stir up hatred” against different groups. Critics immediately complained that this wording was: one, hopelessly vague, as ‘hatred’ wasn’t defined; and two, out of step with other areas of criminal law, as no ‘intent’ was required on the part of an offender. Mere ‘likelihood’ of stirring up hatred was caught. The ‘psychological harm’ offences pose exactly the same problems. No definition has been provided for the subjective term ‘harm’, and behaviour merely ‘likely’ to cause ‘harm’ would be caught. The threshold for offending is potentially very low.

Vague speech laws are a recipe for disaster. In Scotland, lawyers and police officers warn that the ‘stirring up’ offences could be weaponised by cancel culture bullies who wish to subdue their critics. For example, activists who consider even the mildest criticism of trans ideology to be hatred against trans people could report statements to the police and allege hatred has been ‘stirred up’. Even if no prosecution occurs, the author of such criticism would be subjected to a stressful police investigation, with all the negative implications that brings for their family life and employment.

With the emphasis of the UK Government’s ‘psychological harm’ offences resting on the feelings of alleged victims, activists could similarly misuse them. A tweet stating that “women are adult human females”, for example, could be reported and investigated on the grounds that it causes “likely psychological harm” to a man who identifies as a trans woman. The approach inherent in these offences fits neatly into the ‘cancel culture’ playbook. UK Ministers must not open the door to this kind of censorship, as their Scottish counterparts have.

Other aspects of the online safety regime are similarly problematic. The bill would empower the state communications regulator, Ofcom, to punish social media companies and search engines that host “lawful but still harmful” content, including “misinformation”. It isn’t at all clear what any of this means, and critics have questioned whether empowering a state regulator to define “misinformation” is even appropriate in a democratic society. The Free Speech Union (FSU) states:

“Surely, if speech isn’t prohibited offline, it shouldn’t be prohibited online? In a democratic society, citizens should be free to make up their own minds about whether they trust what they’re reading. Asking a state regulator to decide what information is trustworthy comes with a variety of risks, not the least of which is that it will sometimes get it wrong, or deliberately suppress accurate information at the behest of powerful political forces.”

Provisions in the bill also seek to curb “harmful content” that could have a “significant adverse…psychological impact on an adult of ordinary sensibilities”. The odd phrase “ordinary sensibilities” isn’t defined. The FSU asks whether it will see companies having to “remove content that a majority of people find upsetting or offensive?”, adding: “Isn’t the point of a good deal of free debate precisely to ‘impact’ people’s ordinary sensibilities?”

These aspects of the online safety plans are as clear as mud, and there is no room for imprecision in laws concerning the most fundamental right in democratic societies – the right to free expression. Ministers should think again.