Taylor Swift AI images & the rise of the deepfakes problem
In this episode of GZERO AI, Taylor Owen, professor at the Max Bell School of Public Policy at McGill University and director of its Centre for Media, Technology & Democracy, examines how Taylor Swift's ordeal with AI deepfake porn sheds light on the complexities of the information ecosystem in the biggest election year in history, including the US presidential election.
Okay, so full disclosure, I don't love the NFL and my ten-year-old son is more into Ed Sheeran than Taylor Swift, so she hasn't yet flooded our household. However, when one of the most famous people in the world is caught in a deepfake porn attack driven by a right-wing conspiracy theory, forcing one of the largest platforms in the world to shut down all Taylor Swift-related content, well, now you have my attention. But what are we to make of all this?
First thing I think is it shows how crazy this US election cycle is going to be. The combination of new AI capabilities, unregulated platforms, a flood of opaque super PAC money, and a candidate who's perfectly willing to fuel conspiracy theories means the information ecosystem this year is going to be a mess.
Second, however, I think we're starting to see some of the policy levers that could be pulled to address this problem. The DEFIANCE Act, introduced in the Senate last week, gives victims of deepfakes the right to sue the people who created them. The Preventing Deepfakes of Intimate Images Act, currently stuck in the House, goes a step further and imposes criminal liability on the people who create deepfakes.
Third, though, I think this shows how we need to regulate platforms, not just the AI that creates the deepfakes, because the main problem with this content is not the ability to create it; we've had that for a long time. It's the ability to disseminate it broadly to a large number of people. That's where the real harm lies. For example, one of these Taylor Swift videos was viewed 45 million times and stayed up for 17 hours before it was removed by Twitter. And the hashtag #TaylorSwiftAI was boosted as a trending topic by Twitter, meaning it was algorithmically amplified, not just posted and shared by users. So what I think we might start seeing here is a slightly more nuanced conversation about the liability protection we give to platforms. That could mean platforms become liable for content that is algorithmically amplified, or potentially for content that is created by AI.
All that said, I would not hold my breath for the US to do anything here. For the content regulations we may need, we're probably going to have to look to Europe, to the UK, to Australia, and this year to Canada.
So what should we actually be watching for? Well, one thing I would look for is how the platforms themselves respond to what is now both an unavoidable problem and one that has certainly gotten the attention of advertisers. When Elon Musk took over Twitter, he decimated its content moderation team. But Twitter has now announced that it will start rebuilding one. And you had better believe it's doing this not because of the threat of the US Senate but because of the threat from its biggest advertisers. Advertisers do not want their content placed beside politically motivated deepfake pornography of incredibly popular people. So that's what I'd be watching for here: how are the platforms themselves going to respond to what is a very clear problem, one that exists in part because of how they've designed their platforms and their companies?
I'm Taylor Owen, and thanks for watching.
Will Taylor Swift's AI deepfake problems prompt Congress to act?
Marietje Schaake, International Policy Fellow at Stanford Human-Centered Artificial Intelligence and former European Parliamentarian, co-hosts GZERO AI, our weekly video series intended to help you keep up with and make sense of the latest news on the AI revolution. In this episode, she explains how Taylor Swift's traumatic experience with AI deepfake porn, combined with the pop star's enormous popularity, could be the turning point in passing laws that protect individuals from harmful generative AI practices.
Today I want to talk about Taylor Swift, and that may suggest that we are going to have a lighthearted episode, but that's not the case. On the contrary, because the pop icon has been the subject of one of the most traumatizing experiences that anyone can live through online in relation to AI and new technology.
Taylor Swift was the victim of non-consensual sexually explicit content: a pornographic deepfake. Now, the term deepfake may ring a bell because we've talked about the ever more convincing messages that generative AI can create in the context of election manipulation and disinformation. That is indeed a grave concern of mine. But when you look at the numbers, the vast majority of deepfakes online are of a pornographic nature. And when those are non-consensual, imagine, for example, that the target is not a pop icon whom everybody knows and can come to the rescue for, but a young teenager faced with a deepfake porn image of themselves, with classmates sharing it. You can well imagine the deep trauma and stress this causes, and we know that this kind of practice has unfortunately led to self-harm among young people as well.
So it is high time that tech companies do more and take more responsibility for preventing this kind of terrible non-consensual use of their products, and the ensuing sharing and virality online. If there's one silver lining to this otherwise very depressing experience for Taylor Swift, it is that she and her followers may be able to do what few have managed: move Congress to pass legislation. There seems to be bipartisan movement, and all I can hope is that it will lead to better protection of people from the worst practices of generative AI.