Government Proposal to Fight “Online Harms” Presents Dangers of Its Own

By Tim McSorley

Over the past two decades, many of us have come to rely on online platforms for basic necessities, communication, education and entertainment. Online, we see the good – access to otherwise hard-to-find information, connecting with loved ones – and the bad. The bad often combines harms we know all too well, including hate speech, racism, misogyny, homophobia, transphobia, the sexual exploitation of minors, bullying and incitement to violence, with new forms of harassment and abuse that occur at a much larger scale, and with new ways to distribute harmful and illegal content.

Many social media sites have committed to addressing these harms. But platforms whose business models prioritize engagement and retention – regardless of the content – have proven ineffective at doing so, and some studies show it is in their business interest to keep feeding users the most controversial content. When these platforms do remove content, researchers have documented that the very communities facing harassment are often the ones most censored. Governments around the world have also used the pretext of combating hate speech and online harms – such as “terrorist content” – to enact censorship and silence opponents, including human rights defenders.

The Canadian government had been promising to address this issue since 2019, framing it explicitly around fighting “online hate.” It eventually released its proposal to tackle online harms in late July 2021, alongside a public consultation. There were immediate concerns about holding the consultation in the dead of summer, with an election on the horizon. When the election was called a few weeks later, round tables with government officials who could answer questions about the proposal were cancelled.

While the government’s approach was bad, the proposal itself was worse. As cyber policy researcher Daphne Keller described it, Canada’s original proposal was “like a list of the worst ideas around the world – the ones human rights groups… have been fighting in the EU, India, Australia, Singapore, Indonesia, and elsewhere.”

ICLMG’s central concern with the government’s approach has been the inclusion of “terrorist content.” Since 2001, we have seen how the enforcement of anti-terrorism laws has led to the violation of human rights, especially because the definition of terrorism can be twisted to suit political ends. Yet under the government’s initial proposal, social media companies would have been required to identify “terrorist” content through mass surveillance, to act on any content reported by users within 24 hours or face penalties of up to millions of dollars, and to automatically share information with law enforcement and national security agencies – both privatizing and expanding the surveillance and criminalization of internet users. The proposal even put forward new warrant powers for CSIS that would go far beyond addressing “online harms.” It was a recipe for racial and political profiling, particularly of Muslims, Indigenous people and other people of colour, and for the violation of their rights and freedoms.

In February 2022, the Department of Canadian Heritage released a “What We Heard” report that recognized many of the valid concerns with the government’s approach. It announced a new consultation process, led by an expert advisory group, that would review these concerns and advise on what the government’s approach should be.

Various groups, including the ICLMG, continued working together to respond to the government’s proposals and to develop ideas on how best to fight online harms. We published op-eds and met with government officials and MPs. In March 2023, we helped draft a group position document on core guiding principles – including “red lines” – for any future legislation, which was sent to the Minister of Canadian Heritage and shared with opposition critics.

More than two and a half years after sharing its initial proposal, in late February 2024, the government introduced Bill C-63 to create the Online Harms Act. The bill has proven controversial in large part because it also seeks to amend the Criminal Code and the Canadian Human Rights Act in ways that raise civil liberties and human rights concerns.

When it comes to online harms specifically, though, the analysis and advocacy of the ICLMG and others have resulted in a much better bill than could have been expected in 2021. In particular:

  • While the bill still includes seven different categories of harms, it no longer proposes a simple “one-size-fits-all” approach.
  • There is no explicit requirement for platforms to monitor all content in order to identify and remove harmful posts.
  • The main focus is on the regulation of platforms, in the form of obligations to create and follow online safety plans, and not on policing all users.
  • Except for content that sexually victimizes a child, there is no requirement for mandatory reporting of content or users to the RCMP or CSIS.
  • There are no proposals to create new CSIS warrant powers.
  • There are stronger rules around platform accountability, transparency and reporting.

However, there remain serious areas of concern:

  • The proposed category of “content that incites violent extremism or terrorism” is, by its nature, overly broad and vague.
  • Given that there is a nearly identical, and more specific, harm of “content that incites violence,” a separate terrorism-focused harm is redundant.
  • While the bill does not explicitly require platforms to proactively monitor content, it does not prohibit such monitoring either.
  • Platforms would be required to preserve data relating to posts alleged to incite violence, violent extremism or terrorism for one year, so that it is available to law enforcement if needed for an investigation.
  • The proposed Digital Safety Commission, which would enforce the rules under the Online Harms Act, is granted incredibly broad powers with minimal oversight.
  • A lack of clarity around hearings and investigations could open the door to malicious accusations of posting “terrorist content,” and leaves uncertainty about recourse for those whose content is erroneously taken down by platforms.

This is clearly a complex problem, and it is easier to point out flaws than to develop concrete solutions. What appears clear, though, is that empowering private online platforms to carry out greater surveillance and content removal not only fails to address the heart of the issue, but also creates more harm. Instead, governments must invest in offline solutions that combat the roots of racism, misogyny, bigotry and hatred. Just as importantly, governments must address the business models of social media platforms that profit from surveillance and use content that provokes outrage and division to drive engagement and retain audiences. So long as there is profit to be made from fuelling these harms, we will never truly address them.


Tim McSorley is the National Coordinator of the International Civil Liberties Monitoring Group.

