Social media's AI wave: Are we in for a “deepfakification” of the entire internet?
In this episode of GZERO AI, Taylor Owen, professor at the Max Bell School of Public Policy at McGill University and director of its Centre for Media, Technology & Democracy, looks into the phenomenon he terms the "deepfakification" of social media. He traces the evolution of our social feeds, which began as platforms primarily for sharing updates with friends and are now inundated with content generated by artificial intelligence.
So 2024 might just end up being the year of the deepfake. Not some fake Joe Biden video or deepfake pornography of Taylor Swift. Those are definitely problems, and definitely going to be a big thing this year. But what I'd argue is a bigger problem is what might be called the “deepfakification” of the entire internet, and definitely of our social feeds.
Cory Doctorow has called this more broadly the “enshittification” of the internet. And I think the way AI is playing out in our social media is a very good example of this. What we've seen in our social media feeds has been an evolution. It began with information that our friends shared. It then merged in content that an algorithm thought we might want to see. It then became clickbait and content designed to target our emotions via these same algorithmic systems. But now, when many people open their Facebook or their Instagram or their TikTok feeds, what they're seeing is content that's been created by AI. AI content is flooding Facebook and Instagram.
So what's going on here? Well, in part, these companies are doing what they've always been designed to do, to give us content optimized to keep our attention.
If this content happens to be created by an AI, it might even do that better. It might be designed by the AI specifically to keep our attention. And AI is proving a very useful tool for doing this. But this has had some crazy consequences. It's led, for example, to the rise of AI influencers: rather than real people selling us ideas or products, these are AIs. Companies like Prada and Calvin Klein have hired an AI influencer named Lil Miquela, who has over 2.5 million followers on TikTok. A model agency in Barcelona created an AI model after having trouble dealing with the schedules and demands of prima donna human models. They say they didn't want to deal with people with egos, so they built an AI model instead.
And that AI model brings in as much as €10,000 a month for the agency. But I think this gets at a far bigger issue: it's increasingly difficult to tell whether the things we're seeing are real or fake. If you scroll through the comments on the page of an AI influencer like Lil Miquela, it's clear that a good chunk of her followers don't know she's an AI.
Now, platforms are starting to deal with this a bit. TikTok requires users themselves to label AI content, and Meta says it will flag AI-generated content. But for this to work, they need a way of signaling it effectively and reliably to us as users, and they just haven't done that. Here's the thing, though: we can make them do it. The Canadian government's new Online Harms Act, for example, demands that platforms clearly identify AI- or bot-generated content. We can do this, but we have to make the platforms do it. And I don't think that can come a moment too soon.
- Why human beings are so easily fooled by AI, psychologist Steven Pinker explains ›
- The geopolitics of AI ›
- AI and Canada's proposed Online Harms Act ›
- AI at the tipping point: danger to information, promise for creativity ›
- Will Taylor Swift's AI deepfake problems prompt Congress to act? ›
- Deepfake porn targets high schoolers ›
Midjourney quiets down politics
Everything is political for GZERO, but AI image generator Midjourney would rather avoid the drama. The company has begun blocking the creation of images featuring President Joe Biden and former President Donald Trump in the run-up to the US presidential election in November.
“I don’t really care about political speech,” said Midjourney CEO David Holz in an event with users last week. “That’s not the purpose of Midjourney. It’s not that interesting to me. That said, I also don’t want to spend all of my time trying to police political speech. So we’re going to have to put our foot down on it a bit.”
Holz’s statement comes just weeks after the Center for Countering Digital Hate issued a report showing it was able to use popular AI image generators to create election disinformation in 41% of its attempts. Midjourney performed worst of all the tools the group tested, with researchers able to generate such images 65% of the time.
Examples included images of Joe Biden sick in a hospital bed, Donald Trump in a jail cell, and a box of thrown-out ballots in a dumpster. GZERO tried to generate a simple image of Biden and Trump shaking hands and received an error message: “Sorry! Our AI moderator thinks this prompt is probably against our community standards.”
Midjourney, it seems, simply doesn’t want to be in the business of policing which political speech is acceptable and which isn’t, so it’s taking the easy way out and turning the nozzle off entirely. OpenAI’s tools have long been hesitant to wade into political waters, and Microsoft and Google have drawn sharp criticism for sensitivity failures involving historical accuracy and offensive imagery. Why would Midjourney take that risk?
AI labels are coming to Instagram and Facebook. Will they work?
Sir Nick Clegg, president of global affairs at Meta, the parent company of Facebook, Instagram, and Threads, announced Tuesday their platforms would begin labeling AI-generated images.
Meta is working with AI image generators like Midjourney and Shutterstock to add metadata to images that have been created by artificial intelligence, which will then automatically trigger a label when posted. Clegg framed it as a crucial safety measure and said the company would build the technology over the next year.
There are some drawbacks. First, the technology won’t work on video or audio yet, but Clegg says Meta will take down any unlabeled AI-generated clip that “creates a particularly high risk of materially deceiving the public on a matter of importance.”
Second, even still images may be able to get around Meta’s detector by doing something as simple as processing it through photo editing software to generate new metadata, according to experts.
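The fragility of metadata-based labeling is easy to demonstrate: most image-editing pipelines simply don't carry embedded metadata through a re-encode. Here's a minimal sketch in Python using the Pillow library, with a hypothetical `ai_generated` text tag standing in for whatever provenance marker a generator might embed (this is illustrative, not Meta's actual scheme):

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Create a sample image and embed a hypothetical AI-provenance tag
img = Image.new("RGB", (64, 64), "white")
meta = PngInfo()
meta.add_text("ai_generated", "true")  # illustrative key, not a real standard
img.save("labeled.png", pnginfo=meta)

# The tag survives a plain reload of the file...
assert Image.open("labeled.png").text.get("ai_generated") == "true"

# ...but a simple open-and-resave, as any photo editor does, drops it,
# because Pillow (like most tools) does not copy text chunks by default
Image.open("labeled.png").save("resaved.png")
assert "ai_generated" not in getattr(Image.open("resaved.png"), "text", {})
```

A detector keyed only to embedded metadata would flag the first file and miss the second, which is why experts consider this signal so easy to strip.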
As for AI-generated text, Clegg says it would be pointless to try to identify and label it all. “That ship has sailed,” he told Reuters.
Taylor Swift AI images & the rise of the deepfakes problem
In this episode of GZERO AI, Taylor Owen, professor at the Max Bell School of Public Policy at McGill University and director of its Centre for Media, Technology & Democracy, examines how Taylor Swift's plight with AI deepfake porn sheds light on the complexities of the information ecosystem in the biggest election year ever, which includes the US elections.
Okay, so full disclosure, I don't love the NFL and my ten-year-old son is more into Ed Sheeran than Taylor Swift, so she hasn't yet flooded our household. However, when one of the most famous people in the world is caught in a deepfake porn attack driven by a right-wing conspiracy theory, forcing one of the largest platforms in the world to shut down all Taylor Swift-related content, well, now you have my attention. But what are we to make of all this?
First thing I think is it shows how crazy this US election cycle is going to be. The combination of new AI capabilities, unregulated platforms, a flood of opaque super PAC money, and a candidate who's perfectly willing to fuel conspiracy theories means the information ecosystem this year is going to be a mess.
Second, however, I think we're starting to see some of the policy levers that could be pulled to address this problem. The Defiance Act, tabled in the Senate last week, gives victims of deepfakes the right to sue the people who created them. The Preventing Deepfakes of Intimate Images Act, stuck in the House currently, goes a step further and puts criminal liability on the people who create deepfakes.
Third, though, I think this shows how we need to regulate platforms, not just the AI that creates the deepfakes, because the main problem with this content is not the ability to create it, which we've had for a long time. It's the ability to disseminate it broadly to a large number of people. That's where the real harm lies. For example, one of these Taylor Swift videos was viewed 45 million times and stayed up for 17 hours before it was removed by Twitter. And the hashtag #TaylorSwiftAI was boosted as a trending topic by Twitter, meaning it was algorithmically amplified, not just posted and disseminated by users. So what I think we might start seeing here is a slightly more nuanced conversation about the liability protection that we give to platforms. This might mean that they are now liable for content that is either algorithmically amplified or potentially content that is created by AI.
All that said, I would not hold my breath for the US to do anything here. And probably, for the content regulations we may need, we're going to need to look to Europe, to the UK, to Australia, and this year to Canada.
So what should we actually be watching for? Well, one thing I would look for is how the platforms themselves respond to what is now both an unavoidable problem and one that has certainly gotten the attention of advertisers. When Elon Musk took over Twitter, he decimated its content moderation team. But Twitter has now announced that it will start rehiring one. And you better believe it's doing this not because of the threat of the US Senate but because of the threat of its biggest advertisers. Advertisers do not want their content placed beside politically motivated deepfake pornography of incredibly popular people. So that's what I'd be watching for here: how the platforms themselves respond to what is a very clear problem, one that exists in part because of how they've designed their platforms and their companies.
I'm Taylor Owen, and thanks for watching.
- AI at the tipping point: danger to information, promise for creativity ›
- Hard Numbers: Faking Taylor, Powering Perplexity, Keying change, Risking extinction, Embracing AI in NY ›
- Will Taylor Swift's AI deepfake problems prompt Congress to act? ›
- Deepfake porn targets high schoolers ›
- Deepfakes and dissent: How AI makes the opposition more dangerous - GZERO Media ›
- Voters beware: Elections and the looming threat of deepfakes - GZERO Media ›
- AI and Canada's proposed Online Harms Act - GZERO Media ›
- AI vs. truth: Battling deepfakes amid 2024 elections - GZERO Media ›
How AI is changing the world of work
The AI revolution is coming… fast. But what does that mean for your job? GZERO World with Ian Bremmer takes a deep dive into this exciting and anxiety-inducing new era of generative artificial intelligence. Generative AI tools like ChatGPT and Midjourney have the potential to increase productivity and prosperity massively, but there are also fears of job replacement and unequal access to technology.
Ian Bremmer sat down with tech expert Azeem Azhar and organizational psychologist Adam Grant on the sidelines of the World Economic Forum in Davos, Switzerland to hear how CEOs are already incorporating AI into their businesses, what the future of work might look like as AI tools become more advanced, and what the experts are still getting wrong about the most powerful technology to hit the workforce since the personal computer.
“One of the dangers of last year was that people started to lose their faith in technology, and technology is what provides prosperity,” Azhar says. “We need to have more grownup conversations, more civil conversations, more moderate conversations about what that reality is.”
Catch GZERO World with Ian Bremmer on US public television every week. Check local listings.
- AI at the tipping point: danger to information, promise for creativity ›
- Hard Numbers: Must-have accessory?, Americans on AI, Bill Gates’ prediction, Massive paychecks, Airbnb's big bet ›
- Political fortunes, job futures, and billions hang in the balance amid labor unrest ›
- Larry Summers: Which jobs will AI replace? ›
- AI's impact on jobs could lead to global unrest, warns AI expert Marietje Schaake ›
- How neurotech could enhance our brains using AI - GZERO Media ›
- Can AI help doctors act more human? - GZERO Media ›
AI and the future of work: Experts Azeem Azhar and Adam Grant weigh in
Listen: What does this new era of generative artificial intelligence mean for the future of work? On the GZERO World Podcast, Ian Bremmer sits down with tech expert Azeem Azhar and organizational psychologist Adam Grant on the sidelines of the World Economic Forum in Davos, Switzerland, to learn more about how this exciting and anxiety-inducing technology is already changing our lives, what comes next, and what the experts are still getting wrong about the most powerful technology to hit the workforce since the personal computer.
The rapid advances in generative AI tools like ChatGPT, which has been public for only a little over a year, are stirring up excitement and deep anxieties about how we work, and even whether we work at all. Artificial intelligence could massively increase productivity and prosperity, but there are fears of job replacement and unequal access to technology. Will AI be the productivity booster CEOs hope for, or the job killer employees fear?
Subscribe to the GZERO World Podcast on Apple Podcasts, Spotify, Stitcher, or your preferred podcast platform to receive new episodes as soon as they're published.
Ian Explains: How will AI impact the workplace?
Generative AI could increase productivity and prosperity... but also replace jobs and increase global inequality.
As long as humans have been inventing new technology, they’ve worried it will replace their jobs. From Ancient Greece to Elizabethan England, people feared machines and automation would eliminate the need for human labor. Hundreds of years later, the same conversation is happening around artificial intelligence—the most powerful technology to hit the workforce since the personal computer.
On Ian Explains, Ian Bremmer looks at the history of human anxiety about being replaced by machines and the impact this new AI era will have on today’s workers. Will AI be the productivity booster CEOs hope for, or the job-killer employees fear? Experts are torn. Goldman Sachs predicts a $7 trillion increase in global GDP over the next decade from advances in AI, but the International Monetary Fund estimates that AI will negatively impact 40% of all jobs globally in the same time frame.
Human capital has been the powerhouse of economic growth for most of history, but the unprecedented pace of advances in AI is stirring up excitement and deep anxieties about not only how we work but if we’ll work at all.
Watch the upcoming episode of GZERO World with Ian Bremmer on US public television this weekend (check local listings) and at gzeromedia.com/gzeroworld.
- One big thing missing from the AI conversation | Zeynep Tufekci ›
- Political fortunes, job futures, and billions hang in the balance amid labor unrest ›
- Larry Summers: Which jobs will AI replace? ›
- AI's impact on jobs could lead to global unrest, warns AI expert Marietje Schaake ›
- This year's Davos is different because of the AI agenda, says Charter's Kevin Delaney ›
- What impact will AI have on gender equality? - GZERO Media ›
- Can AI help doctors act more human? - GZERO Media ›
Will Taylor Swift's AI deepfake problems prompt Congress to act?
Marietje Schaake, International Policy Fellow, Stanford Human-Centered Artificial Intelligence, and former European Parliamentarian, co-hosts GZERO AI, our weekly video series intended to help you keep up and make sense of the latest news on the AI revolution. In this episode, she talks about how Taylor Swift's traumatic experience with AI deepfake porn could be the turning point in passing laws that protect individuals from harmful generative AI practices, thanks to the pop star's popularity.
Today I want to talk about Taylor Swift, and that may suggest that we are going to have a lighthearted episode, but that's not the case. On the contrary, because the pop icon has been the subject of one of the most traumatizing experiences that anyone can live through online in relation to AI and new technology.
Taylor Swift was the victim of the creation of non-consensual sexually explicit content, or a pornographic deepfake. Now, the term deepfake may ring a bell because we've talked about the more convincing messages that generative AI can create in the context of election manipulation and disinformation. And that is indeed a grave concern of mine. But when you look at the numbers, the vast majority of deepfakes online are of a pornographic nature. And when those are non-consensual, imagine, for example, that it's not a pop icon whom everybody knows and can rush to defend, but a young teenager faced with a deepfake porn image of themselves that classmates are sharing. You can well imagine the deep trauma and stress this causes, and we know that this kind of practice has unfortunately led to self-harm among young people as well.
So it is high time that tech companies do more and take more responsibility for preventing this kind of terrible non-consensual use of their products and the ensuing sharing and virality online. If there's one silver lining to Taylor Swift's otherwise very depressing experience, it is that she and her followers may be able to do what few have managed: move Congress to pass legislation. There seems to be bipartisan movement, and all I can hope is that it will lead to better protection of people from the worst practices of generative AI.
- Making rules for AI … before it’s too late ›
- Can watermarks stop AI deception? ›
- Deepfake porn targets high schoolers ›
- Regulate AI, but how? The US isn’t sure ›
- Taylor Swift AI images & the rise of deepfakes problem - GZERO Media ›
- Voters beware: Elections and the looming threat of deepfakes - GZERO Media ›