Deepfake videos are a possible election threat
Some startups like Runway and Pika have made AI video models available to the public, but video generation as a whole has further to go than image and text generation. There are more visual clues in videos that can show something isn’t 100% authentic: blurry spots, lag, discontinuity between frames, object impermanence, or other visual oddities are often present.
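Those visual tells can, at least for now, be surfaced with fairly simple tooling. Below is a minimal, illustrative sketch, not a real deepfake detector: it uses OpenCV to score the frame-to-frame discontinuity described above, and the input filename and the threshold are arbitrary assumptions.

```python
# Illustrative sketch only, not a production deepfake detector.
# It measures the frame-to-frame discontinuity described above.
# "video.mp4" and the 40.0 threshold are arbitrary assumptions.
import cv2
import numpy as np

def frame_discontinuity_scores(path: str) -> list[float]:
    """Mean absolute pixel change between consecutive frames."""
    cap = cv2.VideoCapture(path)
    scores = []
    ok, prev = cap.read()
    while ok:
        ok, frame = cap.read()
        if not ok:
            break
        diff = cv2.absdiff(frame, prev)  # per-pixel change vs. previous frame
        scores.append(float(np.mean(diff)))
        prev = frame
    cap.release()
    return scores

scores = frame_discontinuity_scores("video.mp4")
# Sudden spikes can flag lag or broken continuity, though legitimate
# scene cuts trigger them too, which is why this is only a heuristic.
suspicious = [i for i, s in enumerate(scores) if s > 40.0]
print(f"{len(suspicious)} abrupt jumps across {len(scores)} frame pairs")
```

In practice, detection research combines many such signals with learned models; a single pixel-difference heuristic mostly shows how low the bar is for spotting today's cruder fakes.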
We’ve seen deepfake images and audio of Donald Trump and Joe Biden, AI voices in Pakistan, and AI avatars in Indonesia. While deepfake videos haven’t yet been prevalent, they’re almost certainly the next frontier. Ahead of an interview with Sen. Amy Klobuchar, CNN anchor Jake Tapper deployed a convincing deepfake version of himself, and Miles Taylor, the former Department of Homeland Security chief of staff, warned in Time Magazine that a deepfake video could be this election season’s “October surprise.”
If a deepfake video doesn’t sow chaos during the upcoming US election, it’s almost sure to disrupt an election somewhere in the world very soon.
How the UN is combating disinformation in the age of AI
Disinformation is running rampant in today’s world. The internet, social media, and AI — combined with declining trust in major institutions — have created an ecosystem ripe for exploitation by nefarious actors aiming to spread false and hateful narratives. Meanwhile, governments worldwide are struggling to get big tech companies to take substantive steps to combat disinformation. And at the global level, the UN’s priorities are also being hit hard by these trends.
“We can't bring about and generate stability in fragile environments if populations are turning against our peacekeepers as a result of lies being spread against them online. We can't make progress on climate change if people are being led to believe first of all, that maybe it doesn't even exist, or that it's not as bad as they thought, or that it's actually too late and there's nothing that they can do about it,” Melissa Fleming, the UN's Under-Secretary-General for Global Communications, told GZERO in a conversation at the SDG Media Zone during the 79th UN General Assembly.
“The UN alone cannot tackle these problems without civil society, without people. And the people are what drives political agendas. So it's really important for us to work on our information ecosystems together,” Fleming added.
Though Fleming said that many in the UN are excited by AI's myriad potential benefits, she also emphasized the serious problems it’s already posing in terms of accelerating the spread of disinformation — particularly via deepfakes.
“We've spent a lot of time also trying to educate the public on how to spot misinformation and disinformation and how to tell if a photo is real or if it is fake. In the AI information age, that's going to become nearly impossible,” Fleming said.
“So we're calling on AI actors to really create safety by design, and don't leave it only to the users to be able to try to figure out how to navigate this. They are designing these instruments, and they can be part of the solution,” she added.
The FEC kicks AI down the road
That means the job of keeping deepfakes out of political ads will largely fall to tech platforms and AI developers. Tech companies signed an agreement at the Munich Security Conference in February, vowing to take “reasonable precautions” to prevent their AI tools from being used to disrupt elections. The task could also fall partly to broadcasters: The Federal Communications Commission is still considering new rules for AI-generated content in political ads on broadcast television and radio stations. That has caused tension between the two agencies, too: The FEC doesn’t believe the FCC has the statutory authority to act, but the FCC maintains that it does.
After a deepfake version of Joe Biden’s voice was used in a robocall in the run-up to the New Hampshire Democratic primary, intended to trick voters into staying home, the FCC asserted that AI-generated robocalls were illegal under existing law. But the clock is ticking for further action, since other AI-manipulated media may not currently be covered under the law. At this point, serious regulation from either agency seems likely to come only after Donald Trump and Kamala Harris square off in November — and perhaps only if Harris wins, as another Trump presidency might mean a further rollback of election rules.
Dreams of a dancing Modi
A video circulating on social media shows Indian Prime Minister Narendra Modi dressed stylishly and dancing to a Bollywood song, while another shows his political rival Mamata Banerjee in a similar setting, though with a political speech of hers playing in the background. Are India’s political leaders getting down on the dancefloor to drive voters to the polls in ongoing elections? Nope — both were created with artificial intelligence.
While Modi made light of his, calling such creativity “a delight,” the video of Banerjee, which featured parts of a speech in which she criticized those who have left her party for Modi’s, elicited a different response: Indian police said it could “affect law and order,” and they are investigating. One Kolkata cybercrime officer warned the X user who posted the Banerjee video that they could be “liable for strict penal action.” Still, the user told Reuters they are not deleting the video and don’t believe the police can trace their anonymous account.
The videos were made with Viggle, a free online service, showing that even cheap or free tools can cause a major stir in global politics.
The Indian government has been selective about when it embraces artificial intelligence, positioning the country as a leader in the technology while cracking down on uses that offend its right-wing sensibilities. Late last year, the government even considered asking Meta to break WhatsApp’s encryption to identify who created and circulated deepfake videos of politicians. Perhaps Modi’s regime can make India into a destination for AI companies — if it doesn’t keep shooting itself in the foot when it feels threatened.
Gaza protests, union negotiations, and deepfakes: Is the Met Gala a microcosm of the times?
Last night, the Metropolitan Museum of Art rolled out the red carpet for the Met Gala — a star-studded fundraiser hosted by media giant Condé Nast — amid pro-Palestinian protests, union negotiations, and deepfake dresses.
Gaza protests: As celebrities took to the red carpet Monday night, police struggled to contain hundreds of pro-Palestinian protesters marching down Fifth Avenue to protest the event. Many of the demonstrators came from Hunter College in an evolution of the campus protests that have swept the country — and likely a harbinger of what’s to come this summer, when students leave campus but still strive to make their voices heard.
Union negotiations: Just 12 hours earlier, Condé Nast reached a deal with unionized employees who had threatened to abandon their jobs at the event if long-stalled contract negotiations didn’t produce an agreement. In a post on X on Saturday night, the union warned that management could “meet us at the table or meet us at the Met on Monday.” The deal continues a year of union wins and includes wage increases, additional parental leave, and hybrid work protections.
Deepfakes: Meanwhile, many of us who didn’t pay $75,000 for a seat and were watching the red carpet online were bamboozled by deepfakes of Katy Perry in two different dresses, both generated by AI. Perry did not attend the gala, but if you were fooled, don't feel too bad; her own mother was too.
The Met Gala is often criticized for being a pedestal for the out-of-touch, but this time, even the force of the mighty Anna Wintour couldn’t insulate the event from the outside world.
Alleged AI crime rocks Maryland high school
Dazhon Darien, a former athletic director at Pikesville High School in Baltimore County, Maryland, was arrested on April 25 and charged with a litany of crimes related to using AI to frame the school's principal. Darien allegedly created a fake AI voice of Principal Eric Eiswert, used it to generate racist and antisemitic statements, and posted the audio on social media in January. Eiswert was temporarily removed from the school after the audio emerged.
The police allege that Darien used the school’s internet to search for AI tools and sent emails about the recording. The audio was then sent to and posted by a popular Baltimore-area Instagram account on Jan. 17. It’s unclear which tool was used to make the recording, but digital forensics experts said it was clearly fake.
At least 10 states have some form of deepfake laws, though some are focused on pornography. Still, AI-specific charges are rare in the US. Darien was charged with disrupting school activities, theft, retaliation against a witness, and stalking.
Deepfake audio has become a major problem in global elections, but this story demonstrates that it can just as easily be weaponized in person-to-person disputes.
AI and Canada's proposed Online Harms Act
In this episode of GZERO AI, Taylor Owen, professor at the Max Bell School of Public Policy at McGill University and director of its Centre for Media, Technology & Democracy, takes a look at the Canadian government’s Online Harms Act, which seeks to hold social media companies responsible for harmful content – often generated by artificial intelligence.
So last week, the Canadian government tabled their long-awaited Online Harms legislation. Similar to the Digital Services Act in the EU, this is a big, sweeping piece of legislation, so I won't get into all the details. But essentially what it does is put the onus on social media companies to minimize the risk of their products. In so doing, this bill actually provides a window into how we might start regulating AI.
It does this in two ways. First, the bill requires platforms to minimize the risk of exposure to seven types of harmful content, including self-harm content directed at kids or posts that incite hatred or violence. The key here is that the obligation is on social media platforms, like Facebook or Instagram or TikTok, to minimize the risk of their products, not to take down every piece of bad content. The concern is not with each individual piece of content, but with the way that social media products, and particularly their algorithms, might amplify or help target its distribution. And these products are very often driven by AI.
Second, one area where the proposed law does mandate a takedown of content is intimate image abuse, and that includes deepfakes or content that's created by AI. If an intimate image is flagged as non-consensual, even if it's created by AI, it needs to be taken down by the platform within 24 hours. Even in a vacuum, AI-generated deepfake pornography or revenge porn is deeply problematic. But what's really worrying is when these things are shared and amplified online. And to get at that element of this problem, we don't actually need to regulate the creation of these deepfakes; we need to regulate the social media platforms that distribute them.
So countries around the world are struggling with how to regulate something as opaque and unknown as the existential risk of AI, but maybe that's the wrong approach. Instead of trying to govern this largely undefined risk, maybe we should be watching countries like Canada that are starting with the harms we already know about.
Instead of broad sweeping legislation for AI, we might want to start with regulating the older technologies, like social media platforms that facilitate many of the harms that AI creates.
I'm Taylor Owen and thanks for watching.
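One concrete way to read the 24-hour takedown rule Owen describes is as a simple deadline check in a platform's moderation queue. The sketch below is hypothetical: the FlaggedItem record and its fields are invented for illustration and are not drawn from the bill's text or any platform's actual systems.

```python
# Hypothetical sketch of tracking a 24-hour takedown window for content
# flagged as non-consensual intimate imagery. Field names are invented;
# the bill specifies the deadline, not any implementation like this.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

TAKEDOWN_WINDOW = timedelta(hours=24)

@dataclass
class FlaggedItem:
    content_id: str
    flagged_at: datetime  # when the non-consensual flag was filed

    @property
    def deadline(self) -> datetime:
        return self.flagged_at + TAKEDOWN_WINDOW

    def is_overdue(self, now: datetime | None = None) -> bool:
        now = now or datetime.now(timezone.utc)
        return now > self.deadline

item = FlaggedItem("img-123", datetime(2024, 3, 1, 9, 0, tzinfo=timezone.utc))
print(item.deadline)  # 2024-03-02 09:00:00+00:00
print(item.is_overdue(datetime(2024, 3, 2, 10, 0, tzinfo=timezone.utc)))  # True
```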
Voters beware: Elections and the looming threat of deepfakes
With AI tools already being used to manipulate voters across the globe via deepfakes, more needs to be done to help people comprehend what this technology is capable of, says Microsoft vice chair and president Brad Smith.
Smith highlighted a recent example of AI being used to deceive voters in New Hampshire.
“The voters in New Hampshire, before the New Hampshire primary, got phone calls. When they answered the phone, there was the voice of Joe Biden — AI-created — telling people not to vote. He did not authorize that; he did not believe in it. That was a deepfake designed to deceive people,” Smith said during a Global Stage panel on AI and elections on the sidelines of the Munich Security Conference last month.
“What we fundamentally need to start with is help people understand the state of what technology can do and then start to define what's appropriate, what is inappropriate, and how do we manage that difference?” Smith went on to say.
Watch the full conversation here: How to protect elections in the age of AI