Online violence means real-world danger for women in politics like EU's Lucia Nicholsonová
Content Warning: This clip contains sensitive language.
In a compelling dialogue from a GZERO Global Stage discussion on gender equality in the age of AI, Lucia Nicholsonová, former Slovak National Assembly vice president and current member of the European Parliament for Slovakia, recounts her harrowing personal experiences with disinformation campaigns and gendered hate speech online.
Ms. Nicholsonová read example messages she receives online, such as, "Damn you and your whole family. I wish you all die of cancer."
She has also faced false accusations of past criminal activity spread through deliberate online misinformation campaigns, which she says led to public humiliation and threats, including strangers spitting on her in the street. These attacks were fueled by misogyny and prejudice and took a toll on her mental well-being and family life.
As Ms. Nicholsonová recalls, “It was a real trauma because I mean, at some point I wasn't able to go out of my home because I felt so threatened.”
The conversation was presented by GZERO in partnership with Microsoft and the UN Foundation. The Global Stage series convenes heads of state, business leaders, and technology experts from around the world for critical debate about the geopolitical and technology trends shaping our world.
Watch the full conversation here: What impact will AI have on gender equality?
- Atwood and Musk agree on Online Harms Act ›
- Facebook allows "lies laced with anger and hate" to spread faster than facts, says journalist Maria Ressa ›
- AI and Canada's proposed Online Harms Act ›
- What impact will AI have on gender equality? - GZERO Media ›
AI and Canada's proposed Online Harms Act
In this episode of GZERO AI, Taylor Owen, professor at the Max Bell School of Public Policy at McGill University and director of its Centre for Media, Technology & Democracy, takes a look at the Canadian government’s Online Harms Act, which seeks to hold social media companies responsible for harmful content – often generated by artificial intelligence.
So last week, the Canadian government tabled their long-awaited Online Harms legislation. Similar to the Digital Services Act in the EU, this is a big, sweeping piece of legislation, so I won't get into all the details. But essentially what it does is put the onus on social media companies to minimize the risk of their products. And in so doing, this bill actually provides a window into how we might start to regulate AI.
It does this in two ways. First, the bill requires platforms to minimize the risk of exposure to seven types of harmful content, including self-harm content directed at kids or posts that incite hatred or violence. The key here is that the obligation is on social media platforms, like Facebook or Instagram or TikTok, to minimize the risk of their products, not to take down every piece of bad content. The concern is not with each individual piece of content, but with the way that social media products, and particularly their algorithms, might amplify or help target its distribution. And these products are very often driven by AI.
Second, one area where the proposed law does mandate a takedown of content is when it comes to intimate image abuse, and that includes deepfakes or content that's created by AI. If an intimate image is flagged as non-consensual, even if it's created by AI, it needs to be taken down within 24 hours by the platform. Even in a vacuum, AI-generated deepfake pornography or revenge porn is deeply problematic. But what's really worrying is when these things are shared and amplified online. And to get at that element of this problem, we don't actually need to regulate the creation of these deepfakes; we need to regulate the social media that distributes them.
So countries around the world are struggling with how to regulate something as opaque and unknown as the existential risk of AI, but maybe that's the wrong approach. Instead of trying to govern this largely undefined risk, maybe we should be watching countries like Canada that are starting with the harms we already know about.
Instead of broad sweeping legislation for AI, we might want to start with regulating the older technologies, like social media platforms that facilitate many of the harms that AI creates.
I'm Taylor Owen and thanks for watching.
- When AI makes mistakes, who can be held responsible? ›
- Taylor Swift AI images & the rise of the deepfakes problem ›
- Ian Bremmer: On AI regulation, governments must step up to protect our social fabric ›
- AI regulation means adapting old laws for new tech: Marietje Schaake ›
- EU AI regulation efforts hit a snag ›
- Online violence means real-world danger for women in politics - GZERO Media ›
- Social media's AI wave: Are we in for a “deepfakification” of the entire internet? - GZERO Media ›
Fighting online hate: Global internet governance through shared values
After a 2019 terrorist attack on a mosque in Christchurch, New Zealand, was live-streamed on the internet, the Christchurch Call was launched to counter the increasing weaponization of the internet and to ensure that emerging tech is harnessed for good.
In a recent Global Stage livestream from the sidelines of the 78th UN General Assembly, former New Zealand Prime Minister Dame Jacinda Ardern discussed the challenges and disparities inherent in the ever-evolving digital age, ranging from unrestricted online platforms in liberal democracies to severe content limitations in certain countries.
“If you look beyond just liberal democracies, on the one hand you have the discussion about free speech and the view that some hold around being able to use online platforms to publish just about anything. Then in some countries, the inability to publish anything at all,” said Ardern.
In her new role as Special Envoy for the Christchurch Call, she advocated for departing from conventional country-centric strategies and proposed a foundation built on shared values instead, prioritizing the safeguarding of human rights and the preservation of an open internet over national interests. “Let's establish the value set, the common problem identification to bring everyone around the table.”
Watch the full Global Stage Livestream conversation here: Hearing the Christchurch Call
- Hearing the Christchurch Call ›
- Facebook allows "lies laced with anger and hate" to spread faster than facts, says journalist Maria Ressa ›
- What We’re Watching: Ardern's shock exit, sights on Crimea, Bibi’s budding crisis, US debt ceiling chaos ›
- Jacinda Ardern on the Christchurch Call: How New Zealand led a movement ›
How tech was used to harm democracy on January 6
Marietje Schaake, International Policy Director at Stanford's Cyber Policy Center, Eurasia Group senior advisor and former MEP, discusses trends in big tech, privacy protection and cyberspace:
What is the tech legacy of the January 6th storming of the Capitol, one year on?
Now, one lesson is that it is so clear that there is no such thing as an online world separated from our offline lives. We see democracy being harmed in new ways and speech fueling actions in the streets. And this is not just a speech issue: data harvesting and micro-targeting are giving those hate speech calls wings online.
Second, there is still so much we don't know. We learn new things every week. This week, Brookings researchers showed how podcasts were used to fan the flames of fraud claims and violence, and the Washington Post and ProPublica published their analysis of 650,000 Facebook posts, about 10,000 a week, leading up to the storming of the Capitol. Their valuable work comes a year after the failed coup attempt and reminds us of the opacity of the workings of tech companies. Facebook itself has actually refused to turn over the documents that the congressional investigative committee has asked for.
Now, while the dots are still being connected on January 6th and the events that unfolded, we already see plenty of new threats, plots, and lies to hurt democratic rights being devised. Now I hope today and this week, everyone pauses and reflects, remembering that there are no winners when democracy itself is lost.