AI and Canada's proposed Online Harms Act

Canada wants to hold AI companies accountable with proposed legislation | GZERO AI

In this episode of GZERO AI, Taylor Owen, professor at the Max Bell School of Public Policy at McGill University and director of its Centre for Media, Technology & Democracy, takes a look at the Canadian government's Online Harms Act, which seeks to hold social media companies responsible for harmful content – often generated by artificial intelligence.

So last week, the Canadian government tabled their long-awaited Online Harms legislation. Similar to the Digital Services Act in the EU, this is a big, sweeping piece of legislation, so I won't get into all the details. But essentially what it does is it puts the onus on social media companies to minimize the risk of their products. And in so doing, this bill actually provides a window into how we might start to regulate AI.

It does this in two ways. First, the bill requires platforms to minimize the risk of exposure to seven types of harmful content, including self-harm content directed at kids or posts that incite hatred or violence. The key here is that the obligation is on social media platforms, like Facebook or Instagram or TikTok, to minimize the risk of their products, not to take down every piece of bad content. The concern is not with each individual piece of content, but with the way that social media products, and particularly their algorithms, might amplify or help target its distribution. And these products are very often driven by AI.

Second, one area where the proposed law does mandate a takedown of content is when it comes to intimate image abuse, and that includes deepfakes or content that's created by AI. If an intimate image is flagged as non-consensual, even if it's created by AI, it needs to be taken down within 24 hours by the platform. Even in a vacuum, AI-generated deepfake pornography or revenge porn is deeply problematic. But what's really worrying is when these things are shared and amplified online. And to get at that element of this problem, we don't actually need to regulate the creation of these deepfakes; we need to regulate the social media platforms that distribute them.

So countries around the world are struggling with how to regulate something as opaque and unknown as the existential risk of AI, but maybe that's the wrong approach. Instead of trying to govern this largely undefined risk, maybe we should be watching countries like Canada that are starting with the harms we already know about.

Instead of broad, sweeping legislation for AI, we might want to start by regulating the older technologies, like social media platforms, that facilitate many of the harms that AI creates.

I'm Taylor Owen and thanks for watching.
