The new proposal would oblige platforms to ‘take appropriate steps to identify and mitigate potential harm arising out of the operation and design of their services’.

The federal Liberal government is planning to shift gears on its controversial proposal to regulate “online harms,” moving toward an approach that puts the onus on digital platforms to tackle potentially harmful content. The move comes after critics warned that the original plan would amount to censorship, and newly released documents show that a government-appointed advisory group backed the change of approach.
However, most “if not all” members of the Heritage Canada-appointed advisory group suggested that the categories of targeted harm be broadened to include, among other things, “misleading political communications,” “propaganda,” and online content promoting an “unrealistic body image.” The government has not yet indicated whether it will accept all of the group’s recommendations.
A series of worksheets recently posted online by Heritage Canada indicates that the government is moving away from its original plan for a regime based on strict content-moderation obligations, under which platforms would have been ordered to remove content Ottawa deemed harmful within 24 hours or face punishment.
Instead, an “updated approach” would focus on a “common framework that forces platforms to assess the risk posed by harmful content on their services, provide details about how they address identified risks, and respond to instances of harmful content on their platforms.”
The government’s first attempt to regulate such content was widely criticized in a consultation held last year. Internet experts, academics, Google, civil liberties groups and research librarians cautioned that the proposed plan would result in the blocking of legitimate content and censorship, and would violate Canadians’ constitutional and privacy rights.
Five categories of content would be covered under the government’s core regulatory scheme: terrorist content, material inciting violence, hate speech, non-consensually shared intimate images and child sexual exploitation content. Under the original plan, in addition to complying with government takedown orders, platforms would also have been required to proactively monitor posts.
In February, the government said it would revise the proposal after significant backlash, and in March, Heritage Minister Pablo Rodriguez appointed an “expert advisory group” to advise on redesigning the law.
The group of 12 completed its meetings on June 10. Canadian Heritage said in a press release this week that a final summary of the group’s findings will be published in the coming weeks.
The government has published summaries of the group’s weekly meetings, as well as worksheets that outline the government’s “preliminary ideas” for updating the proposed legislation.
The new approach would regulate the same five categories of content and would cover “services that Canadians intuitively associate with social media platforms” – naming in particular Facebook, YouTube, Instagram, Twitter and TikTok – as well as other services that “pose a significant risk” of disseminating harmful material, such as the porn site Pornhub.
Messages sent using the private messaging function of a platform, such as Facebook Messenger, would not be captured.
A new regulator called the Digital Safety Commissioner would enforce the framework, with the ability to issue orders and impose fines, and would be equipped with audit and inspection powers.
The new proposal adopts a “duty of care” approach, obliging platforms to “take appropriate steps to identify and mitigate potential harm arising from the operation and design of their services.”
That means platforms would have to file digital safety plans with the regulator, which, the government noted, would require them to perform a “risk assessment of harmful content on their platforms, and detail their mitigation measures, systems and procedures to address such risks.”
According to the worksheet released by the government, the regime would set baseline standards defining harmful content, which regulated services would, in turn, monitor and act on. The idea is that as long as platforms have adequate systems in place, they “will not be penalized for reaching a reasonable conclusion about whether the content meets the legislative definitions of harmful content.”
At an April meeting, most of the 12 advisers expressed support for “moving away from a ‘take-down’ approach to content regulation” and instead “encouraging platforms to manage risk while developing their products,” according to a summary published by Canadian Heritage.
The new approach is similar to the one put forward by the U.K. government in its Online Safety Bill. One benefit of the systems-based approach, the Heritage worksheet said, is that it seeks to keep limits on freedom of expression within reasonable bounds and subject to procedural fairness and safeguards.
The summary of the April 21 meeting said many experts “stressed that whatever framework is chosen, it will be critically important not to encourage a general system of monitoring.”
Some also expressed concern about “outsourcing the duty to consider fundamental rights to private companies,” particularly in Canada since, in their view, Canada does not have a clear conception of freedom of expression.
They said it would be “particularly important to be as clear as possible in the law about what is expected of regulated services in considering the fundamental rights and freedoms of their users.”
Many also emphasized that a framework seeking to impose obligations on services to remove unlawful content would raise Charter concerns.
But at the same time, most, “if not all,” members told the government that the scope of the law should be expanded.
In addition to the five categories of content proposed by the government, they said the framework should cover a range of both illegal content and legal but potentially harmful content, including fraud, cyberbullying, defamation, “propaganda,” “misleading political communications” and the “mass sharing of traumatic events.”
They also suggested targeting material and algorithms that contribute to an “unrealistic body image” and to “alienation or reduced memory retention and ability to concentrate.” The government also consulted the experts on how it could address propaganda.
Those different types of content would not necessarily be treated the same. “Many experts recommended that the framework differentiate between illegal and legal but harmful content, imposing different obligations on regulated services for each type of content,” the summary said.