Marietje Schaake, International Policy Fellow at Stanford Human-Centered Artificial Intelligence and former European Parliamentarian, co-hosts GZERO AI, our weekly video series intended to help you keep up with and make sense of the latest news on the AI revolution. In this episode, she explains how Taylor Swift's traumatic experience with AI deepfake porn could, thanks to the pop star's enormous following, be the turning point in passing laws that protect individuals from harmful generative AI practices.
Today I want to talk about Taylor Swift, and that may suggest we are going to have a lighthearted episode, but that's not the case. On the contrary: the pop icon has been subjected to one of the most traumatizing experiences anyone can go through online in relation to AI and new technology.
Taylor Swift was the victim of non-consensual sexually explicit content, a pornographic deepfake. Now, the term deepfake may ring a bell because we've talked about the ever more convincing messages that generative AI can create in the context of election manipulation and disinformation. That is indeed a grave concern of mine. But when you look at the numbers, the vast majority of deepfakes online are of a pornographic nature, and when those are non-consensual, the harm is immense. Imagine, for example, that it's not a pop icon whom everybody knows and can come to the rescue of, but a young teenager confronted with a deepfake porn image of themselves being shared by classmates. You can well imagine the deep trauma and stress this causes, and we know that this kind of practice has unfortunately led to self-harm among young people as well.
So it is high time that tech companies do more and take more responsibility for preventing this kind of terrible non-consensual use of their products, and for curbing the sharing and virality that follow online. If there's one silver lining to Taylor Swift's otherwise very depressing experience, it is that she and her followers may be able to do what few have managed: move Congress to pass legislation. There seems to be bipartisan momentum, and all I can hope is that it will lead to better protection of people from the worst practices of generative AI.
- Making rules for AI … before it’s too late
- Can watermarks stop AI deception?
- Deepfake porn targets high schoolers
- Regulate AI, but how? The US isn’t sure
- Taylor Swift AI images & the rise of deepfakes problem
- Voters beware: Elections and the looming threat of deepfakes