Social media's AI wave: Are we in for a “deepfakification” of the entire internet?
In this episode of GZERO AI, Taylor Owen, professor at the Max Bell School of Public Policy at McGill University and director of its Centre for Media, Technology & Democracy, looks into the phenomenon he terms the "deepfakification" of social media. He traces the evolution of our social feeds, which began as platforms primarily for sharing updates with friends and are now inundated with content generated by artificial intelligence.
So 2024 might just end up being the year of the deepfake. Not some fake Joe Biden video or deepfake pornography of Taylor Swift. Those are definitely problems, and definitely going to be a big story this year. But what I see as a bigger problem is what might be called the “deepfakification” of the entire internet, and certainly of our social feeds.
Cory Doctorow has called this broader trend the “enshittification” of the internet, and I think the way AI is playing out in our social media is a very good example of it. What we see in our social media feeds has been an evolution. It began with information our friends shared. It then merged in content that an algorithm thought we might want to see. It then became clickbait and content designed to target our emotions via those same algorithmic systems. But now, when many people open their Facebook or their Instagram or their TikTok feeds, what they're seeing is content created by AI. AI content is flooding Facebook and Instagram.
So what's going on here? Well, in part, these companies are doing what they've always been designed to do, to give us content optimized to keep our attention.
If this content happens to be created by an AI, it might even do that better. It might be designed by the AI specifically to keep our attention, and AI is proving a very useful tool for doing this. But this has had some crazy consequences. It's led, for example, to the rise of AI influencers: rather than real people selling us ideas or products, these are AIs. Companies like Prada and Calvin Klein have hired an AI influencer named Lil Miquela, who has over 2.5 million followers on TikTok. A model agency in Barcelona created an AI model after having trouble dealing with the schedules and demands of prima donna human models. They say they didn't want to deal with people with egos, so they built an AI model instead.
And that AI model brings in as much as €10,000 a month for the agency. But I think this gets at a far bigger issue: it's increasingly difficult to tell whether the things we're seeing are real or fake. If you scroll through the comments on the page of an AI influencer like Lil Miquela, it's clear that a good chunk of her followers don't know she's an AI.
Now, platforms are starting to deal with this a bit. TikTok requires users themselves to label AI content, and Meta says it will flag AI-generated content. But for this to work, they need a way of signaling it effectively and reliably to us as users, and they just haven't done that. Here's the thing, though: we can make them do it. The Canadian government's new Online Harms Act, for example, demands that platforms clearly identify AI- or bot-generated content. We can do this, but we have to make the platforms do it. And I don't think that can come a moment too soon.
- Why human beings are so easily fooled by AI, psychologist Steven Pinker explains ›
- The geopolitics of AI ›
- AI and Canada's proposed Online Harms Act ›
- AI at the tipping point: danger to information, promise for creativity ›
- Will Taylor Swift's AI deepfake problems prompt Congress to act? ›
- Deepfake porn targets high schoolers ›
AI's role in the Israel-Hamas war so far
Artificial intelligence is changing the world, and our new video series GZERO AI explores what it all means for you—from disinformation and regulation to the economic and political impact. Co-hosted by Taylor Owen, professor at the Max Bell School of Public Policy at McGill University and director of its Centre for Media, Technology & Democracy, and by Marietje Schaake, International Policy Fellow, Stanford Human-Centered Artificial Intelligence and former European Parliamentarian, this weekly video series will help you keep up and make sense of the latest news on the AI revolution.
In the first episode of the series, Taylor Owen takes a look at how artificial intelligence is shaping the war between Israel and Hamas.
As the situation in the Middle East continues to escalate, today we're asking: how is artificial intelligence shaping the war between Israel and Hamas? The short answer is: not in the ways many expected it might. I think there are two cautions about the power of AI here, and one place where AI has been shown to really matter. The first caution is on the value of predictive AI. For years, many have argued that AI might not just help us understand the world as it is, but might actually help us predict future events. Nowhere has this been more the case than in the worlds of national security and policing.
Now, Gaza happens to be one of the most surveilled regions in the world. The use of drones, facial recognition, border checkpoints, and phone tapping has allowed the Israeli government to collect vast amounts of data about the Gazan population. Add to this the fact that the director of the Israeli Defense Ministry has said that Israel is about to become an AI superpower, and one would think the government might have had the ability to predict such an attack. But on October 7th, this was notably not the case. The government, the military, and Israeli citizens themselves were taken by surprise.
The reality, of course, is that however powerful the AI might be, it is only as good as the data fed into it. If that data is biased or just plain wrong, so will be its predictions. So I think we need to be really cautious, particularly about the sales pitches being made by the companies selling these predictive tools to our policing and national security services. The certainty with which they're doing so, I think, needs to be questioned.
The second caution I would add is on the role AI plays in the creation of misinformation. Don't get me wrong, there's been a ton of it in this conflict, but it hasn't really been the synthetic media or the deepfakes that many feared would be a big problem in events like this. Instead, the misinformation has been low tech: photos and videos from other events taken out of context and presented as if they were from this one. It's been cheap fakes, not deepfakes. Now, there have even been cases where AI deepfake detection tools, rolled out in response to the problem of deepfakes, have falsely identified real images as being created by AI. In this case, the threat of deepfakes is causing more havoc than the deepfakes themselves.
Finally, though, I think there is a place where AI is causing real harm in this conflict, and that is on social media. Our Twitter and our Facebook and our TikTok feeds are being shaped by artificially intelligent algorithms. More often than not, these algorithms reinforce our biases and fuel our collective anger. The world seen through content that only makes us angry is fundamentally a distorted one. And more broadly, I think calls for reining in social media, whether by the companies themselves or through regulation, are being replaced with opaque and ill-defined notions of AI governance. Don't get me wrong, AI policy is important, but it is the social media ecosystem that is still causing real harm. We can't take our eye off that policy ball.
I'm Taylor Owen, and thanks for watching.
- Is Israel ready for the nightmare waiting in Gaza? ›
- Lessons from Gaza: Think before you Tweet ›
- Be very scared of AI + social media in politics ›
- The AI power paradox: Rules for AI's power ›
- The OpenAI-Sam Altman drama: Why should you care? - GZERO Media ›
- Gemini AI controversy highlights AI racial bias challenge - GZERO Media ›
- Israel's Lavender: What could go wrong when AI is used in military operations? - GZERO Media ›
Ian Explains: The dark side of AI
Hollywood has long warned us about a future where humans and machines become indistinguishable, and we might be closer than we think. OpenAI's DALL-E 2 can create images from text prompts, like astronauts riding horses in space. And its ChatGPT language model generates human-like text, blurring the lines between sci-fi and reality. By 2023, AI might even pass the Turing test, which for decades has measured a machine's ability to exhibit human-like intelligence.
While generative AI has the power to solve major global challenges, it also presents dangers, Ian Bremmer explains on GZERO World.
Authoritarian governments can use it to increase surveillance and spread misinformation. In democracies, AI can create and spread large volumes of misinformation that make it difficult to distinguish fact from fiction.
We're at a critical juncture. How will generative AI change our lives? Will the ultimate movie be a rom-com or a horror film?
Watch the GZERO World episode: The AI arms race begins: Scott Galloway’s optimism & warnings
- Can we control AI before it controls us? ›
- Artificial intelligence from Ancient Greece to 2021 ›
- Be more worried about artificial intelligence ›
- The transformative potential of artificial intelligence ›
- Emotional AI: More harm than good? - GZERO Media ›
- GZERO World with Ian Bremmer: Season 6 preview - GZERO Media ›
Was Elon Musk right about Twitter's bots?
The world's richest man is trying to get out of buying Twitter because the social media platform has a lot more fake accounts than he thought.
But does he have a point? Certainly, says Facebook whistleblower Frances Haugen, who even recalls one social network with bots accounting for half of its users.
Companies could easily get rid of fake accounts, Haugen tells Ian Bremmer on GZERO World, but then businesses would lose out.
"If you can have a 1% drop in the number of users on your site, and you can have a 10% drop in the valuation of your company, that's huge," she says, adding that this discourages taking down networks of bots.
Watch the GZERO World episode: Why social media is broken & how to fix it
- Why EU social media regulation matters to you - GZERO Media ›
- Hard Numbers: Musk threatens Twitter, Sri Lanka's president won't ... ›
- The Graphic Truth: Twitter doesn't rule the social world - GZERO Media ›
- Elon Musk to buy Twitter: will misinformation thrive? - GZERO Media ›
- Twitter's scent of Musk - GZERO Media ›
- Whistleblowers & how to activate a new era of digital accountability - GZERO Media ›
- Elon Musk wants a way out of Twitter - GZERO Media ›
The AI addiction cycle
Ever wonder why everything seems to be a major crisis these days? For former Google CEO Eric Schmidt, it's because artificial intelligence has determined that's the only way to get your attention.
What's more, it's driving an addiction cycle among humans that will lead to enormous depression and dissatisfaction.
"Oh my God, there's another message. Oh my God, there's another crisis. Oh my God, there's another outrage. Oh my God. Oh my God. Oh my God. Oh my God," he says. "I don't think humans, at least in modern society, [have] evolved to be in an 'Oh my God' situation all day."
Schmidt admits he failed to predict AI-enabled algorithms would lead to this addiction cycle. And the solution, he believes, is for people other than computer scientists to get involved in discussing the ethics of AI systems.
Watch his interview with Ian Bremmer on GZERO World: Be more worried about artificial intelligence
- Who'll rule the digital world in 2022? - GZERO Media ›
- Artificial intelligence from Ancient Greece to 2021 - GZERO Media ›
- Be more worried about artificial intelligence - GZERO Media ›
- Ian Bremmer explains: Should we worry about AI? - GZERO Media ›
- Beware AI's negative impact on our world, warns former Google CEO Eric Schmidt - GZERO Media ›
- Ian Bremmer: How AI may destroy democracy - GZERO Media ›
Eric Schmidt: We're not ready for what AI may do to us
Artificial intelligence is a reality. But its future impact on us is a big question mark.
For former Google CEO Eric Schmidt, the problem is that AI learns as it goes, a combination we've never seen before.
So, how will we co-exist with AI?
Schmidt says the only solution is for historians, economists, and social experts to join computer scientists in the discussion — before it's too late.
Watch his interview with Ian Bremmer on GZERO World: Be more worried about artificial intelligence
Be more worried about artificial intelligence
As we spend more time online and looking at our screens, we're increasingly living in a digital world. But we don't always know who runs it.
Tech companies are writing the rules — through computer algorithms powered by artificial intelligence. The thing is, Big Tech may have set something in motion it doesn't fully understand or control.
On this episode of GZERO World, Ian Bremmer talks to former Google CEO Eric Schmidt, who believes we need to control AI before it controls us.
What's troubling about AI, he says, is that it’s like nothing we’ve seen before. Instead of being precise, AI — like humans — learns by doing.
China is already doing some pretty scary stuff with AI, like the surveillance of Uyghurs in Xinjiang. For Schmidt, that's because the Chinese have a different set of values, which he doesn't want influencing the AI that, for instance, controls TikTok's algorithms.
Yet, he blames algorithms, not China, for the polarization on social media. Schmidt is all for free speech, but not for robots.
Schmidt also worries about AI exacerbating existing problems like anxiety. Everything becomes a crisis because that's the only way to get people's attention. Tech created by humans is now driving a human addiction cycle that ultimately leads to depression.
Schmidt says we need to debate how we live with AI before the tech gets so fast, so smart that it can decide things that affect us all — before we even know we had a choice.
Subscribe to GZERO Media's YouTube channel to get notifications when new episodes are published.
- Ian Bremmer explains: Should we worry about AI? - GZERO Media ›
- Kai-fu Lee: What's next for artificial intelligence? - GZERO Media ›
- Artificial intelligence from Ancient Greece to 2021 - GZERO Media ›
- Why social media is broken & how to fix it - GZERO Media ›
- Ian Bremmer: How AI may destroy democracy - GZERO Media ›
- AI at the tipping point: danger to information, promise for creativity - GZERO Media ›
- The AI arms race begins: Scott Galloway’s optimism & warnings - GZERO Media ›
- Ian Explains: The dark side of AI - GZERO Media ›
- Podcast: Protecting the Internet of Things - GZERO Media ›
- The banking crisis, AI & Ukraine: Larry Summers weighs in - GZERO Media ›
- The geopolitics of AI - GZERO Media ›
- The AI power paradox: Rules for AI's power - GZERO Media ›
- Artificial intelligence: How soon will we see meaningful progress? - GZERO Media ›
- Ian Bremmer: Algorithms are now shaping human beings' behavior - GZERO Media ›
Podcast: We have to control AI before it controls us, warns former Google CEO Eric Schmidt
Listen: Tech companies set the rules for the digital world through algorithms powered by artificial intelligence. But does Big Tech really understand AI? Former Google CEO Eric Schmidt tells Ian Bremmer that we need to control AI before it controls us.
What's troubling about AI, he says, is that it’s still very new, and AI is learning by doing. Schmidt, co-author of “The Age of AI: And Our Human Future,” worries that AI exacerbates problems like anxiety, driving a human addiction cycle that leads to depression.
Subscribe to the GZERO World Podcast on Apple Podcasts, Spotify, Stitcher, or your preferred podcast platform, to receive new episodes as soon as they're published.
- Kai-fu Lee: What's next for artificial intelligence? - GZERO Media ›
- Ian Bremmer explains: Should we worry about AI? - GZERO Media ›
- Podcast: The future of artificial intelligence with tech CEO Kai-Fu ... ›
- Artificial intelligence from Ancient Greece to 2021 - GZERO Media ›
- Beware AI's negative impact on our world, warns former Google CEO Eric Schmidt - GZERO Media ›
- Podcast: Why Scott Galloway is “cautiously optimistic” about AI - but not TikTok or Meta - GZERO Media ›
- Podcast: How to get social media companies to protect users (instead of hurting them) - GZERO Media ›