Social media's AI wave: Are we in for a “deepfakification” of the entire internet?
In this episode of GZERO AI, Taylor Owen, professor at the Max Bell School of Public Policy at McGill University and director of its Centre for Media, Technology & Democracy, examines the phenomenon he terms the “deepfakification” of social media. He traces the evolution of our social feeds, which began as platforms primarily for sharing updates with friends and are now inundated with content generated by artificial intelligence.
So 2024 might just end up being the year of the deepfake. Not some fake Joe Biden video or deepfake pornography of Taylor Swift. Those are definitely problems, definitely going to be a big thing this year. But what I see as a bigger problem is what might be called the “deepfakification” of the entire internet, and definitely of our social feeds.
Cory Doctorow has called this more broadly the “enshittification” of the internet, and I think the way AI is playing out in our social media is a very good example of it. What we've seen in our social media feeds has been an evolution. It began with information that our friends shared. It then mixed in content that an algorithm thought we might want to see. It then became clickbait and content designed to target our emotions via these same algorithmic systems. But now, when many people open their Facebook or their Instagram or their TikTok feeds, what they're seeing is content that's been created by AI. AI content is flooding Facebook and Instagram.
So what's going on here? Well, in part, these companies are doing what they've always been designed to do: give us content optimized to keep our attention.
If this content happens to be created by an AI, it might even do that better. It might be designed by the AI specifically to keep our attention. And AI is proving a very useful tool for doing just that. But this has had some crazy consequences. It's led to the rise, for example, of AI influencers: rather than real people selling us ideas or products, these are AIs. Companies like Prada and Calvin Klein have hired an AI influencer named Lil Miquela, who has over 2.5 million followers on TikTok. A model agency in Barcelona created an AI model after having trouble dealing with the schedules and demands of prima donna human models. They say they didn't want to deal with people with egos, so they had their AI model do the work instead.
And that AI model brings in as much as €10,000 a month for the agency. But I think this gets at a far bigger issue: it's increasingly difficult to tell whether the things we're seeing are real or fake. If you scroll through the comments on one of these AI influencers' pages, like Lil Miquela's, it's clear that a good chunk of her followers don't know she's an AI.
Now, platforms are starting to deal with this a bit. TikTok requires users themselves to label AI content, and Meta says it will flag AI-generated content. But for this to work, they need a way of signaling it effectively and reliably to us as users, and they just haven't done that. But here's the thing: we can make them do it. The Canadian government's new Online Harms Act, for example, demands that platforms clearly identify AI- or bot-generated content. We can do this, but we have to make the platforms do it. And I don't think that can come a moment too soon.
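To make that signaling problem concrete, here is a minimal sketch in Python of what a machine-readable “AI-generated” label could look like if platforms carried it with every post and surfaced it in the feed. The `Post` structure and `render_feed` helper are hypothetical, purely for illustration; no real platform API works this way.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    body: str
    ai_generated: bool  # hypothetical flag: set by the uploader (TikTok's model) or by the platform (Meta's)

def render_feed(posts: list[Post]) -> None:
    """Print each post, prefixing AI-generated content with a visible label."""
    for post in posts:
        label = "[AI-GENERATED] " if post.ai_generated else ""
        print(f"{label}{post.author}: {post.body}")

render_feed([
    Post(author="a_real_friend", body="Photos from our hike!", ai_generated=False),
    Post(author="lilmiquela", body="New campaign drop this week", ai_generated=True),
])
```

The point of the sketch is only that reliable labeling requires the flag to travel with the content itself, so it survives sharing and can be shown consistently wherever the post appears.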
- Why human beings are so easily fooled by AI, psychologist Steven Pinker explains ›
- The geopolitics of AI ›
- AI and Canada's proposed Online Harms Act ›
- AI at the tipping point: danger to information, promise for creativity ›
- Will Taylor Swift's AI deepfake problems prompt Congress to act? ›
- Deepfake porn targets high schoolers ›
We’re Sora-ing, flying
OpenAI, the buzzy startup behind the ChatGPT chatbot, has begun previewing its next tool: Sora. Just like OpenAI’s DALL-E allows users to type out a text prompt and generate an image, Sora will give customers the same ability with video.
Want a cinematic clip of dinosaurs walking through Central Park? Sure. How about kangaroos hopping around Mars? Why not? These are the kinds of imaginative things that Sora can theoretically generate with just a short prompt. The software has only been tested by a select group of people, and the reviews so far are mixed. It’s groundbreaking but often struggles with things like scale and glitchiness.
AI-generated images have already posed serious problems, including the spread of photorealistic deepfake pornography and convincing-but-fake political images. (For example, Florida Gov. Ron DeSantis’ presidential campaign used AI-generated images of former President Donald Trump hugging Anthony Fauci in a video, and the Republican National Committee did something similar with fake images of Joe Biden.)
While users may not yet have access to movie-quality video generators, they soon might — something that’ll almost certainly supercharge the issues presented by AI-generated images. The World Economic Forum recently named disinformation, especially that caused by artificial intelligence, as the biggest global short-term risk. “Misinformation and disinformation may radically disrupt electoral processes in several economies over the next two years,” according to the WEF. “A growing distrust of information, as well as media and governments as sources, will deepen polarized views – a vicious cycle that could trigger civil unrest and possibly confrontation.”
Eurasia Group, GZERO’s parent company, also named “Ungoverned AI” as one of its Top Risks for 2024. “In a year when four billion people head to the polls, generative AI will be used by domestic and foreign actors — notably Russia — to influence electoral campaigns, stoke division, undermine trust in democracy, and sow political chaos on an unprecedented scale,” according to the report. “A crisis in global democracy is today more likely to be precipitated by AI-created and algorithm-driven disinformation than any other factor.”
Taylor Swift AI images & the rise of the deepfakes problem
In this episode of GZERO AI, Taylor Owen, professor at the Max Bell School of Public Policy at McGill University and director of its Centre for Media, Technology & Democracy, examines how Taylor Swift's plight with AI deepfake porn sheds light on the complexities of the information ecosystem in the biggest election year ever, one that includes the US elections.
Okay, so full disclosure, I don't love the NFL and my ten-year-old son is more into Ed Sheeran than Taylor Swift, so she hasn't yet flooded our household. However, when one of the most famous people in the world is caught in a deepfake porn attack driven by a right-wing conspiracy theory, forcing one of the largest platforms in the world to shut down all Taylor Swift-related content, well, now you have my attention. But what are we to make of all this?
First thing I think is it shows how crazy this US election cycle is going to be. The combination of new AI capabilities, unregulated platforms, a flood of opaque super PAC money, and a candidate who's perfectly willing to fuel conspiracy theories means the information ecosystem this year is going to be a mess.
Second, however, I think we're starting to see some of the policy levers that could be pulled to address this problem. The Defiance Act, introduced in the Senate last week, gives victims of deepfakes the right to sue the people who created them. The Preventing Deepfakes of Intimate Images Act, currently stuck in the House, goes a step further and puts criminal liability on the people who create deepfakes.
Third, though, I think this shows how we need to regulate platforms, not just the AI that creates the deepfakes, because the main problem with this content is not the ability to create it; we've had that for a long time. It's the ability to disseminate it broadly to a large number of people. That's where the real harm lies. For example, one of these Taylor Swift videos was viewed 45 million times and stayed up for 17 hours before it was removed by Twitter. And the hashtag #TaylorSwiftAI was boosted as a trending topic by Twitter, meaning it was algorithmically amplified, not just posted and disseminated by users. So what I think we might start seeing here is a slightly more nuanced conversation about the liability protection that we give to platforms. This might mean that they are now liable for content that is algorithmically amplified, or potentially for content that is created by AI.
All that said, I would not hold my breath for the US to do anything here. And probably, for the content regulations we may need, we're going to need to look to Europe, to the UK, to Australia, and this year to Canada.
So what should we actually be watching for? Well, one thing I would look for is how the platforms themselves are going to respond to what is now an unavoidable problem, and one that has certainly gotten the attention of advertisers. When Elon Musk took over Twitter, he decimated their content moderation team. But Twitter has now announced that they're going to start rehiring one. And you better believe they're doing this not because of the threat of the US Senate but because of the threat of their biggest advertisers. Advertisers do not want their content placed beside politically motivated deepfake pornography of incredibly popular people. So that's what I'd be watching for here: how are the platforms themselves going to respond to what is a very clear problem, one that exists in part as a function of how they've designed their platforms and their companies?
I'm Taylor Owen, and thanks for watching.
- AI at the tipping point: danger to information, promise for creativity ›
- Hard Numbers: Faking Taylor, Powering Perplexity, Keying change, Risking extinction, Embracing AI in NY ›
- Will Taylor Swift's AI deepfake problems prompt Congress to act? ›
- Deepfake porn targets high schoolers ›
- Deepfakes and dissent: How AI makes the opposition more dangerous - GZERO Media ›
- Voters beware: Elections and the looming threat of deepfakes - GZERO Media ›
- AI and Canada's proposed Online Harms Act - GZERO Media ›
- AI vs. truth: Battling deepfakes amid 2024 elections - GZERO Media ›
How AI is changing the world of work
The AI revolution is coming… fast. But what does that mean for your job? GZERO World with Ian Bremmer takes a deep dive into this exciting and anxiety-inducing new era of generative artificial intelligence. Generative AI tools like ChatGPT and Midjourney have the potential to increase productivity and prosperity massively, but there are also fears of job replacement and unequal access to technology.
Ian Bremmer sat down with tech expert Azeem Azhar and organizational psychologist Adam Grant on the sidelines of the World Economic Forum in Davos, Switzerland, to hear how CEOs are already incorporating AI into their businesses, what the future of work might look like as AI tools become more advanced, and what the experts are still getting wrong about the most powerful technology to hit the workforce since the personal computer.
“One of the dangers of last year was that people started to lose their faith in technology, and technology is what provides prosperity,” Azhar says. “We need to have more grownup conversations, more civil conversations, more moderate conversations about what that reality is.”
Catch GZERO World with Ian Bremmer every week on US public television. Check local listings.
- AI at the tipping point: danger to information, promise for creativity ›
- Hard Numbers: Must-have accessory?, Americans on AI, Bill Gates’ prediction, Massive paychecks, Airbnb's big bet ›
- Political fortunes, job futures, and billions hang in the balance amid labor unrest ›
- Larry Summers: Which jobs will AI replace? ›
- AI's impact on jobs could lead to global unrest, warns AI expert Marietje Schaake ›
- How neurotech could enhance our brains using AI - GZERO Media ›
- Can AI help doctors act more human? - GZERO Media ›
AI and the future of work: Experts Azeem Azhar and Adam Grant weigh in
Listen: What does this new era of generative artificial intelligence mean for the future of work? On the GZERO World Podcast, Ian Bremmer sits down with tech expert Azeem Azhar and organizational psychologist Adam Grant on the sidelines of the World Economic Forum in Davos, Switzerland, to learn more about how this exciting and anxiety-inducing technology is already changing our lives, what comes next, and what the experts are still getting wrong about the most powerful technology to hit the workforce since the personal computer.
The rapid advances in generative AI tools like ChatGPT, which has only been public for a little over a year, are stirring up excitement and deep anxieties about how we work and whether we'll work at all. Artificial intelligence can potentially increase productivity and prosperity massively, but there are fears of job replacement and unequal access to technology. Will AI be the productivity booster CEOs hope for, or the job killer employees fear?
Subscribe to the GZERO World Podcast on Apple Podcasts, Spotify, Stitcher, or your preferred podcast platform, to receive new episodes as soon as they're published.
Ian Explains: How will AI impact the workplace?
Generative AI could increase productivity and prosperity... but also replace jobs and increase global inequality.
As long as humans have been inventing new technology, they’ve worried it will replace their jobs. From Ancient Greece to Elizabethan England, people feared machines and automation would eliminate the need for human labor. Hundreds of years later, the same conversation is happening around artificial intelligence—the most powerful technology to hit the workforce since the personal computer.
On Ian Explains, Ian Bremmer looks at the history of human anxiety about being replaced by machines and the impact this new AI era will have on today’s workers. Will AI be the productivity booster CEOs hope for, or the job-killer employees fear? Experts are torn. Goldman Sachs predicts a $7 trillion increase in global GDP over the next decade from advances in AI, but the International Monetary Fund estimates that AI will negatively impact 40% of all jobs globally in the same time frame.
Human capital has been the powerhouse of economic growth for most of history, but the unprecedented pace of advances in AI is stirring up excitement and deep anxieties about not only how we work but if we’ll work at all.
Watch the upcoming episode of GZERO World with Ian Bremmer on US public television this weekend (check local listings) and at gzeromedia.com/gzeroworld.
- One big thing missing from the AI conversation | Zeynep Tufekci ›
- Political fortunes, job futures, and billions hang in the balance amid labor unrest ›
- Larry Summers: Which jobs will AI replace? ›
- AI's impact on jobs could lead to global unrest, warns AI expert Marietje Schaake ›
- This year's Davos is different because of the AI agenda, says Charter's Kevin Delaney ›
- What impact will AI have on gender equality? - GZERO Media ›
- Can AI help doctors act more human? - GZERO Media ›
Will Taylor Swift's AI deepfake problems prompt Congress to act?
Marietje Schaake, International Policy Fellow, Stanford Human-Centered Artificial Intelligence, and former European Parliamentarian, co-hosts GZERO AI, our weekly video series intended to help you keep up with and make sense of the latest news on the AI revolution. In this episode, she talks about how Taylor Swift's traumatic experience with AI deepfake porn could, thanks to the pop star's popularity, be the turning point in passing laws that protect individuals from harmful generative AI practices.
Today I want to talk about Taylor Swift, and that may suggest that we are going to have a lighthearted episode, but that's not the case. On the contrary, because the pop icon has been the subject of one of the most traumatizing experiences that anyone can live through online in relation to AI and new technology.
Taylor Swift was the victim of non-consensual sexually explicit content: a pornographic deepfake. Now, the term deepfake may ring a bell because we've talked about the more convincing messages that generative AI can create in the context of election manipulation and disinformation. And that is indeed a grave concern of mine. But when you look at the numbers, the vast majority of deepfakes online are of a pornographic nature. And when those are non-consensual, imagine that it's not a pop icon whom everybody knows and can come to the rescue for, but a young teenager faced with a deepfake porn image of themselves, with classmates sharing it. You can well imagine the deep trauma and stress this causes, and we know that this kind of practice has unfortunately led to self-harm among young people as well.
So, it is high time that tech companies do more and take more responsibility for preventing this kind of terrible non-consensual use of their products and the ensuing sharing and virality online. If there's one silver lining to this otherwise very depressing experience for Taylor Swift, then it is that she and her followers may be able to do what few have managed: move Congress to pass legislation. There seems to be bipartisan movement, and all I can hope is that it will lead to better protection of people from the worst practices of generative AI.
- Making rules for AI … before it’s too late ›
- Can watermarks stop AI deception? ›
- Deepfake porn targets high schoolers ›
- Regulate AI, but how? The US isn’t sure ›
- Taylor Swift AI images & the rise of deepfakes problem - GZERO Media ›
- Voters beware: Elections and the looming threat of deepfakes - GZERO Media ›
Hard Numbers: Profitable prompts, Happy birthday ChatGPT, AI goes superhuman, Office chatbots, Self-dealing at OpenAI, Saying Oui to Mistral
$200,000: Want an image of a dog? DALL-E could spit out any breed. Want an Australian shepherd with a blue merle coat and heterochromia in front of a backdrop of lush, green hills? Now you’re starting to write like a prompt engineer, and that could be lucrative. Companies are paying up to $200,000 for full-time AI “prompt engineering” roles, placing a premium on this newfangled skill. It's all about descriptive fine-tuning of language to get desired results.
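As a rough illustration of what that skill amounts to in practice, here is a minimal sketch using OpenAI's Python SDK to send a vague prompt and a descriptive, engineered one to the image API. It assumes an `OPENAI_API_KEY` environment variable; the model name and size are plausible defaults, not a claim about how working prompt engineers operate day to day.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

vague = "a dog"
engineered = (
    "An Australian shepherd with a blue merle coat and heterochromia, "
    "standing in front of a backdrop of lush, green hills, soft morning light"
)

# The engineered prompt pins down breed, coat, eyes, setting, and lighting:
# the "descriptive fine-tuning of language" that commands the premium.
for prompt in (vague, engineered):
    result = client.images.generate(model="dall-e-3", prompt=prompt, n=1, size="1024x1024")
    print(f"{prompt[:40]!r} -> {result.data[0].url}")
```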
1: Can you believe it’s only been one year since ChatGPT launched? It all started when OpenAI CEO Sam Altman tweeted, “today we launched ChatGPT. Try talking with it here.” Since then, the chatbot has claimed hundreds of millions of users.
56: Skynet, anyone? No thanks, say 56% of Americans, who are concerned about AI gaining “superhuman capabilities” and support policies to prevent it, according to a new poll by the AI Policy Institute.
$51 million: In 2019, OpenAI reportedly agreed to buy $51 million worth of chips from Rain, a startup making “neuromorphic” chips meant to mirror the activity of the human brain. Why is this making news now? According to Wired, OpenAI’s Sam Altman personally invested $1 million in the company.
$20: You work at a big company and need help sifting through sprawling databases for a single piece of information. Enter AI. Amazon’s new chatbot, called Q, costs $20 a month and aims to help with tasks like “summarizing strategy documents, filling out internal support tickets, and answering questions about company policy.” It’s Amazon’s answer to Microsoft’s work chatbot, Copilot, released in September.
$2 billion: French AI startup Mistral is about to close a new funding round that would value it at $2 billion. The new round, worth $487 million, includes investment from venture capital giant Andreessen Horowitz, along with chipmaker NVIDIA and the business software firm Salesforce. Mistral, founded less than a year ago, boasts an open-source large language model that it hopes will rival OpenAI’s (ironically) closed-source model, GPT-4. What’s the difference? Open-source LLMs publish their source code and model weights so they can be studied and third-party developers can build on them.
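That openness is the practical difference: anyone can download the weights and build on them. A minimal sketch, assuming the Hugging Face `transformers` library (plus `accelerate`) and Mistral's published `mistralai/Mistral-7B-v0.1` checkpoint, which is a multi-gigabyte download that realistically needs a GPU:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-v0.1"  # Mistral's open-weights model on the Hugging Face Hub

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Because the weights are public, this runs locally with no API key,
# and developers are free to study, fine-tune, or extend the model.
inputs = tokenizer("Open-source language models let developers", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

A closed model like GPT-4, by contrast, is reachable only through the vendor's hosted API, which is exactly the distinction the funding round is betting on.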