Online Harms Bill to punish online hate offenders with up to life in prison


Introduction:

The federal government has introduced the Online Harms Bill, known as Bill C-63, aimed at addressing harmful online content and protecting children. This legislation is set to become a focal point in the ongoing debate over freedom of expression on the internet.

Arif Virani, Minister of Justice and Attorney General of Canada, introduced Bill C-63, the Online Harms Act. The bill would create stronger online protections for children and better safeguard everyone in Canada from online hate and other types of harmful content. It sets out a new vision for safer and more inclusive online participation. It would hold online platforms, including livestreaming and adult content services, accountable for design choices that lead to the dissemination and amplification of harmful content, and would ensure that platforms employ mitigation strategies to reduce users' exposure to such content.

Key Aspects of the Online Harms Bill:

The bill seeks to hold social media platforms, user-uploaded adult content platforms, and live-streaming services accountable for reducing exposure to harmful online content. It also aims to strengthen the reporting of child pornography, address hate propaganda, and provide recourse for victims of hate online.

Implications for Children and Communities:

Justice Minister Arif Virani emphasized the need for this bill to protect children from online dangers. He highlighted the rise in online sexual abuse, extortion and exploitation, which has driven a 1,077% increase in reports to the RCMP's National Child Exploitation Crime Centre over the past decade. The bill mandates the removal of intimate content communicated without consent, such as “revenge porn,” and of content that sexually victimizes children or re-victimizes survivors of child sexual abuse.

Establishment of Digital Safety Commission and Ombudsperson:

The Online Harms Bill proposes the creation of a Digital Safety Commission to enforce rules and hold online services accountable. Additionally, a Digital Safety Ombudsperson will support and advocate for users, making recommendations to social media services and the government.

Penalties and Exclusions:

Online services that fail to comply with the rules could face severe penalties, including fines of up to six percent of global revenue or $10 million, whichever is greater. Not all online services are covered, however: private and encrypted messaging services are excluded.
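As a rough illustration of how the penalty ceiling described above works (the "greater of" rule and the figures come from the bill as described in this article; the function itself is purely illustrative, not part of the legislation):

```python
def max_penalty(global_revenue: float) -> float:
    """Penalty ceiling under the bill: the greater of 6% of a
    service's global revenue or a flat $10 million (illustrative)."""
    return max(0.06 * global_revenue, 10_000_000)

# A large service with $1 billion in global revenue:
print(max_penalty(1_000_000_000))  # 60000000.0 — the 6% share exceeds the $10M floor

# A smaller service with $50 million in revenue:
print(max_penalty(50_000_000))     # 10000000 — the flat $10M floor applies
```

In other words, the flat $10 million figure acts as a floor on the maximum fine, so smaller services are not effectively exempt simply because six percent of their revenue would be a modest sum.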

Amendments to Criminal Code and Canadian Human Rights Act:

The bill introduces a new standalone hate crime offence, punishable by up to life imprisonment, to deter hateful conduct. It also raises the maximum punishments for hate propaganda offences, in some cases to life imprisonment. Amendments to the Canadian Human Rights Act would specify that posting hate speech online is discrimination and establish a process for assessing hate speech complaints.


Debates and Controversies:

While the government asserts that the bill enhances free expression by empowering safe online participation, opposition parties, notably the Conservative Party, have criticized it as an attack on freedom of expression. Concerns have also been raised about potential censorship and the practicality of the proposed measures.

Quotes from Parliament:

“I am the parent of two young boys. I will do whatever I can to ensure their digital world is as safe as the neighbourhood we live in. Children are vulnerable online. They need to be protected from online sexual exploitation, hate and cyberbullying. Now more than ever, especially given the evolving capabilities of AI, online platforms must take responsibility for addressing harmful content and creating a digital world where everyone can participate safely and freely. This legislation does just that.”

The Hon. Arif Virani, Minister of Justice and Attorney General of Canada


“As a parent, as someone who represents, as we all do, parents who speak to us all the time about these dangers and how they apprehend and how they watch their children sometimes go through horrible experiences online, I think any responsible government has to act,”

Government House leader Steven MacKinnon

Categories of Harmful Content

The Online Harms Act would specifically target seven categories of harmful content:

  • Content that sexually victimizes a child or revictimizes a survivor;
  • Intimate content communicated without consent;
  • Content that foments hatred;
  • Content that incites violent extremism or terrorism;
  • Content that incites violence;
  • Content used to bully a child; and
  • Content that induces a child to harm themselves.

Obligations related to these seven categories of harmful content would be organized under three duties:

  • Duty to act responsibly;
  • Duty to protect children; and
  • Duty to make certain content inaccessible.

Duty to Act Responsibly

Services would be required to enhance the safety of Canadian children and adults on their platforms by reducing their risk of exposure to the seven types of harmful content. Services would be required to:

  1. Assess the risk of exposure to harmful content, adopt measures to reduce exposure to harmful content, and assess the effectiveness of those measures;
  2. Provide users with guidelines and tools to flag harmful content and to block other users. Services would also have to set up an internal point of contact for user guidance and complaints;
  3. Label harmful content when it is communicated in multiple instances or artificially amplified through automated communications by computer programs. This requirement would include harmful content shared widely by bots or bot networks, for example;
  4. File and publish Digital Safety Plans containing the measures the service is taking, the effectiveness of those measures, the indicators they use to assess effectiveness and any analysis of new risks or trends related to online safety. They would also need to identify the data sets they use and keep and provide those data sets to qualified researchers, when appropriate.

Duty to Protect Children

Services would have a statutory duty to protect children online. To make the digital world safer for kids, services would be required to implement age-appropriate design features and to take the interests of children into account when designing products and features.

These requirements would be set out in regulations issued by the Digital Safety Commission and could include things like defaults for parental controls, default settings related to warning labels for children, or safe search settings for a service’s internal search function. They could also include design features to limit children’s exposure to harmful content, including explicit adult content, cyberbullying content and content that incites self-harm. Allowing the Digital Safety Commission to enact guidelines and regulations under this duty would allow the legislation to be adaptable and to grow over time as the landscape of harmful content affecting children changes.

Duty to make certain content inaccessible

This duty would require services to make two specific categories of harmful content inaccessible to their users:

(1) content that sexually victimizes a child or revictimizes a survivor, and

(2) intimate content posted without consent, including sexualized deepfakes.

These two categories represent the most harmful content online; a single piece of content in either category can cause substantial and lasting harm. The only way to reduce the risk of exposure and to protect victims is to make such content inaccessible.

History of Online Harms bill:

The Online Harms bill originated in a 2019 mandate letter from Prime Minister Justin Trudeau to then-Heritage Minister Steven Guilbeault. The letter requested the creation of new regulations for social media platforms, starting with a requirement to remove illegal content.

During Guilbeault’s tenure, this directive led to two main actions. The first was the introduction of Bill C-36, which focused on hate speech and was tabled late in the previous Parliament. However, this bill did not progress further.

The second action occurred shortly before the 2021 election, when the government released a “technical discussion paper” outlining a proposed legislative framework to address five forms of harmful online content.

This initial proposal included ideas such as a 24-hour takedown requirement for harmful content, transparency from platforms on their algorithms, and a new appeals system for content moderation decisions.

After facing criticism for this proposal, the Liberals promised during the 2021 campaign to introduce a “balanced and targeted” online harms bill within 100 days of the election. Pablo Rodriguez took over the portfolio after the election and embarked on a reworking of the bill.

This process involved consulting a panel of experts in platform governance, content regulation, civil liberties, tech regulation, and national security. Rodriguez and his team also conducted panel discussions across the country with stakeholders and minority groups.

While there was optimism that the bill would be ready by early 2023, the legislation was repeatedly delayed before Bill C-63 was finally tabled. Experts who assisted in crafting the bill had expressed frustration during the delay, noting the lack of protections for Canadian children compared to countries with stricter laws.

Conclusion:


The Online Harms Bill represents a significant step towards making the internet safer for children and addressing harmful online content. While debates over freedom of expression are expected to continue, the need to protect vulnerable users and communities from online harms is a pressing concern that requires careful consideration and balanced approaches.

For more articles like this, please visit ExpressBlogz.
