The Canadian government is currently holding consultations on a new online hate bill. The new bill would build on Bill C-36, which addressed hate propaganda, hate crimes and hate speech, and which died on the order paper when the election was called last year.
Hate propagated on social media and other online spaces has grown exponentially over the past few years, driven to a significant extent by the COVID-19 pandemic.
The occupation of Ottawa earlier this year by the so-called “freedom convoy” also exposed an increasingly worrying relationship between online and offline environments.
Making matters worse
It is difficult to argue against the motivations for the proposed anti-hate bill. At the same time, as the discourse surrounding the bill rapidly gains momentum, there are serious concerns that its scope and unexamined assumptions will produce legislation that is overly broad and unworkable.
While the perceived need to “do something” about hate speech is understandable, the bill runs the real risk of making things significantly worse.
First, there is danger in the use of euphemisms such as “de-platforming” and “content moderation”, which bypass difficult conversations about censorship. We need to be honest about the fact that we’re talking about censorship.
Rather than getting caught up in philosophical concerns, we should worry about practical consequences: specifically, the very real likelihood that attempts to silence certain voices will only succeed in exacerbating the issues we are trying to address.
We must be wary of the law of unintended consequences, which addresses the unforeseen outcomes of legislation and policies.
Overt silencing will only serve to substantiate fundamental far-right narratives, which include: "The government is out to get us" and "Our ideas are so dangerous, the government must suppress them." This, in turn, further inspires and perpetuates the movement.
These efforts also expose the inherent hypocrisy of censorship: the notion that it is not really censorship if enough people disapprove of the intended target. The far right will seize on this sentiment, present it as further confirmation of their worldview and use it to strengthen their calls for fundamental social change.
We must avoid feeding these narratives.
Second, consideration must be given to the vulnerable groups that are most often the targets of hate speech. It has been argued, quite correctly, that specific communities — including visible minorities, Indigenous and LGBTQIA2S+ people, immigrants and refugees — are disproportionately harmed by, and deserve to be protected from, hateful invective. Unfortunately, the potential dangers the new bill poses to these same communities have received insufficient attention.
Members of vulnerable communities have expressed concern that the provisions of the bill could be used to restrict their own online freedoms. These fears are well-founded, as these communities have historically been disproportionately targeted for surveillance and control by law enforcement. The thorny gap between best-laid plans on the one hand, and the realities of implementation and enforcement on the other, brings us back to the law of unintended consequences.
Third, much of the discussion surrounding the bill makes unrealistic assumptions about the capabilities of the technology companies that run social media platforms. Contrary to popular belief, technology cannot easily identify and remove specific content at scale. Relying on purely technological solutions greatly underestimates — and betrays a worrying lack of understanding of — the difficulties of moderating language.
Much research, including work done by one of the authors (Garth) with criminologists Richard Frank and Ryan Scrivens, has revealed that the far-right ecosystem is characterized by a coded language that is constantly evolving. This work has similarly highlighted the challenges of trying to identify specifically violent language.
Quite apart from the fact that they do not want it, we should be reluctant to hand editorial control to private corporations. Their efforts to date have been uneven and can best be described as suspect. Any belief that this can be fixed by an overarching legislative framework is sadly misplaced.
This is not an argument for a social media "free-for-all." It has long been clear that the anything-goes ethos underlying the earliest incarnations of the internet — both comically and tragically — failed to anticipate the toxic swamp it would become. Certain online content should be (and in most cases already is) banned, including threats and the promotion of violence.
But when it comes to trying to limit content that could lead to violence, we find ourselves on much thinner ice. Of course, legislation has a role to play. And yes, technology companies need to be part of the discourse aimed at finding solutions.
However, as the past 20 years have shown, we cannot kill or arrest our way out of violent extremism, nor can we moderate or de-platform our way out of it. Hate speech is a social problem that requires social responses. In the meantime, we must guard against the unintended consequences of attempts to address hate speech online and refrain from feeding far-right narratives.