Rebuilding post-election trust in the age of AI
In a GZERO Global Stage discussion at the 7th annual Paris Peace Forum, Teresa Hutson, Corporate Vice President at Microsoft, reflected on the anticipated impact of generative AI and deepfakes on global elections. Despite widespread concerns, she noted that deepfakes did not significantly alter electoral outcomes. Instead, Hutson highlighted a more subtle effect: the erosion of public trust in online information, a phenomenon she referred to as the "liar's dividend."
"What has happened as a result of deepfakes is... people are less confident in what they're seeing online. They're not sure. The information ecosystem is a bit polluted," Hutson explained. She emphasized the need for technological solutions like content credentials and content provenance to help restore trust by verifying the authenticity of digital content.
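Content credentials work by cryptographically binding provenance metadata (who made a file, with which tool, whether AI was involved) to the media bytes themselves, so that any later tampering is detectable. The snippet below is a minimal sketch of that core idea using only Python's standard library; the `sign_asset` and `verify_asset` helpers and the metadata fields are hypothetical simplifications, not the actual C2PA standard that real content-credential systems implement.

```python
import hashlib
import hmac
import json

# Minimal illustration of content provenance: bind metadata to media bytes
# with a keyed signature so any alteration of either is detectable.
# Real systems (e.g., C2PA content credentials) use public-key certificates;
# an HMAC shared secret is used here only to keep the sketch self-contained.

SECRET_KEY = b"demo-signing-key"  # hypothetical key for this sketch

def sign_asset(media: bytes, metadata: dict) -> str:
    """Return a signature over the media hash plus its provenance metadata."""
    payload = hashlib.sha256(media).hexdigest() + json.dumps(metadata, sort_keys=True)
    return hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()

def verify_asset(media: bytes, metadata: dict, signature: str) -> bool:
    """Recompute the signature; a mismatch means media or metadata changed."""
    return hmac.compare_digest(sign_asset(media, metadata), signature)

image = b"...raw image bytes..."
credentials = {"creator": "Example Newsroom", "tool": "camera", "ai_generated": False}
sig = sign_asset(image, credentials)

print(verify_asset(image, credentials, sig))            # True: untouched
print(verify_asset(image + b"edit", credentials, sig))  # False: tampered
```

Real deployments use public-key signatures and certificate chains so anyone can verify a file without holding a shared secret; the HMAC above only keeps the sketch self-contained.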
Hutson also raised concerns about deepfakes targeting women in public life with non-consensual imagery, potentially deterring them from leadership roles. Looking ahead, she stressed the importance of mitigating harmful uses of AI, protecting vulnerable groups, and establishing appropriate regulations to advance technology in trustworthy ways.
This conversation was presented by GZERO in partnership with Microsoft at the 7th annual Paris Peace Forum. The Global Stage series convenes heads of state, business leaders, and technology experts from around the world for critical debates about the geopolitical and technological trends shaping our world.
Follow GZERO coverage of the Paris Peace Forum here: https://www.gzeromedia.com/global-stage
South Korea banned deepfakes. Is that a realistic solution for the US?
On Sept. 26, South Korea revised its law that criminalizes deepfake pornography. Now, it’s not just illegal to create and distribute this lewd digital material, but also to view it. Anyone found to possess, save, or even watch this content could face up to three years in jail or a $22,000 fine.
Deepfakes are AI mashups in which a person’s face or likeness is superimposed onto explicit content without their consent. It’s an issue that’s afflicted celebrities like Taylor Swift, but also private individuals targeted by people they know.
South Korea’s law is a particularly aggressive approach to combating a serious issue. It’s also a problem that’s much older than artificial intelligence itself. Fake nude images have been created with print photo cutouts as far back as the 19th century, and they flourished in the computer age with Photoshop and other photo-editing tools. The problem has only been supercharged by the rise and widespread availability of deep learning models in recent years. Deepfakes can be weaponized to embarrass, blackmail, or hurt people — typically women — whether they’re famous or not.
While South Korea’s complete prohibition may seem attractive to those desperate to eliminate deepfakes, experts warn that such a ban — especially on viewing the material — is difficult to enforce and likely wouldn’t pass legal muster in the United States.
“I think some form of regulation is definitely needed in this space, and South Korea's approach is very comprehensive,” says Valerie Wirtschafter, a fellow at the Brookings Institution. “I do think it will be difficult to fully enforce just due to the global nature of the internet and the widespread availability of VPNs.”
In the US, at least 20 states have already passed laws addressing nonconsensual deepfakes, but they’re inconsistent. “Some are criminal in nature, others only allow for civil penalties. Some apply to all deepfakes, others only focus on deepfakes involving minors,” says Kevin Goldberg, a First Amendment specialist at the Freedom Forum.
“Creators and distributors take advantage of these inconsistencies and often tailor their actions to stay just on the right side of the law,” he adds. Additionally, many online abuses happen across state lines — if not national borders — which makes it difficult for victims to sue under state law.
Congress has introduced bills to tackle deepfakes, but none has yet become law. The Defiance Act, championed by Rep. Alexandria Ocasio-Cortez and Sens. Dick Durbin and Lindsey Graham, would create a federal civil right of action, allowing victims to sue people who create, distribute, or receive nonconsensual deepfakes. It passed the Senate in July but is still pending in the House.
But a full prohibition on sexually explicit deepfakes would likely run afoul of the First Amendment, which makes it very difficult for the government to ban speech — including explicit media.
“A similar law in the United States would be a complete nonstarter under the First Amendment,” says Corynne McSherry, legal director at the Electronic Frontier Foundation. She argues that existing US law should already protect Americans from many harms of deepfakes, which can be defamatory, invade privacy, or violate citizens’ right of publicity.
Many states, including California, have a right-of-publicity law that allows individuals to sue if their likeness is used without their consent, especially for commercial purposes. For a new deepfake law to survive First Amendment scrutiny, it would need to be narrowly tailored to address a very specific harm without infringing on protected speech, something McSherry says would be very hard to do.
Despite the tricky First Amendment challenges, there is growing recognition of the need for some form of regulation, Wirtschafter says. “It is one of the most pernicious and damaging uses of generative AI, and it disproportionately targets women.”
Employing AI fraud: Fake job applicants and fake employers
Employment scams surged in 2023, up 118% from the year prior, according to the Identity Theft Resource Center, a jump driven largely by the rise of AI. Scammers often pose as recruiters, advertising fake jobs to entice victims into handing over personal information. In 2022, consumers told the US Federal Trade Commission that they lost $367 million to these kinds of scams, and that was largely before the generative AI boom.
On the other side, real businesses are also wary of fake job applicants who can take advantage of remote work policies to interview and even get hired in order to steal money, collect an unearned salary, or gain access to company information. In 2022, the FBI reported an uptick in complaints regarding the use of deepfakes and stolen personal information to apply for remote work positions. “In these interviews, the actions and lip movement of the person seen interviewed on-camera do not completely coordinate with the audio of the person speaking,” the FBI warned. “At times, actions such as coughing, sneezing, or other auditory actions are not aligned with what is presented visually.”
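The mismatch the FBI describes, lip movement out of step with the audio, can be framed as a signal-alignment check: track how open the speaker's mouth is frame by frame, track the loudness of the speech, and test whether the two series move together. The sketch below illustrates the idea on synthetic data with NumPy; the correlation threshold and the synthesized signals are assumptions for illustration, not a production deepfake detector.

```python
import numpy as np

# Toy illustration of an audio-visual sync check: if lip movement and
# speech loudness are well aligned, their correlation should be high.
# Real detectors extract these signals from video frames and audio;
# here both series are synthesized so the sketch is self-contained.

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 250)  # 250 video frames over 10 seconds
speech_energy = np.abs(np.sin(2 * t)) + 0.1 * rng.standard_normal(t.size)

# Genuine video: mouth openness tracks the audio (plus sensor noise).
mouth_genuine = speech_energy + 0.1 * rng.standard_normal(t.size)
# Deepfake-like video: mouth movement unrelated to the audio.
mouth_fake = np.abs(np.sin(2 * t + 1.5)) + 0.1 * rng.standard_normal(t.size)

def sync_score(audio: np.ndarray, mouth: np.ndarray) -> float:
    """Pearson correlation between audio energy and mouth openness."""
    return float(np.corrcoef(audio, mouth)[0, 1])

THRESHOLD = 0.5  # assumed cutoff for this illustration

for label, mouth in [("genuine", mouth_genuine), ("fake", mouth_fake)]:
    score = sync_score(speech_energy, mouth)
    verdict = "aligned" if score > THRESHOLD else "possible desync"
    print(f"{label}: correlation={score:.2f}, {verdict}")
```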
Two years later, the technology is only more sophisticated, with more convincing text generation, text-to-speech tools, deepfake audio, and personal avatars. AI tools, even if intended to make life and business easier for people and companies, can easily be weaponized by bad actors.
Ben Cardin’s deepfake debacle
US Sen. Ben Cardin, a Democrat from Maryland, recently joined a videoconference with a top Ukrainian official. The only problem? It was a deepfake.
Cardin believed he was speaking with former Ukrainian Foreign Minister Dmytro Kuleba, who wanted to chat over Zoom. But according to The New York Times, Cardin grew suspicious when the person posing as Kuleba began asking about politics, the upcoming election, and sensitive foreign policy matters, such as whether Cardin supported firing long-range missiles into Russia. Cardin ended the call and reported it to the State Department, where officials confirmed it was a deepfake. It’s not yet clear who was behind the artificial intelligence mask, which looked and sounded like Kuleba.
Senate security officials warned lawmakers and their aides after the incident. “While we have seen an increase of social engineering threats in the last several months and years, this attempt stands out due to its technical sophistication and believability,” they wrote, cautioning that similar incidents could arise in the future, especially ahead of the November elections.
Deepfake disclosure is coming for politics
With the 2024 election looming, the US Federal Communications Commission has proposed a first-of-its-kind rule requiring disclosure of AI-generated content in political ads on TV and radio.
The proposal came last Thursday, just a day before billionaire Elon Musk shared a video featuring an AI-generated voice impersonating Vice President Kamala Harris, now the presumed Democratic nominee for president, on his social media platform, X. In the video, the fake Harris voice calls Biden “senile” and refers to herself as the “ultimate diversity hire.”
While about 20 states have adopted laws regulating AI in political content, and many social media companies like Meta have banned AI-generated political ads outright, there’s still no comprehensive federal regulation of this content.
That said, after voters in New Hampshire received robocalls with an AI-generated voice of Joe Biden telling them not to vote in the state’s primary, the FCC clarified that AI-generated robocalls are illegal under an existing statute.
The FCC rule, if passed, would require any political ads run on broadcast television and radio stations to disclose if they’re created with artificial intelligence. That wouldn’t ban these ads outright — and certainly wouldn’t slow their spread on social media — but it would be a first step for the government in cracking down on AI in politics.
Social media's AI wave: Are we in for a “deepfakification” of the entire internet?
In this episode of GZERO AI, Taylor Owen, professor at the Max Bell School of Public Policy at McGill University and director of its Centre for Media, Technology & Democracy, looks into the phenomenon he terms the "deepfakification" of social media. He points out the evolution of our social feeds, which began as platforms primarily for sharing updates with friends, and are now inundated with content generated by artificial intelligence.
So 2024 might just end up being the year of the deepfake. Not just some fake Joe Biden video or deepfake pornography of Taylor Swift. Those are definitely problems, and definitely going to be a big thing this year. But what I would argue is a bigger problem is what might be called the “deepfakification” of the entire internet, and definitely of our social feeds.
Cory Doctorow has called this broader trend the “enshittification” of the internet, and I think the way AI is playing out in our social media is a very good example of it. What we've seen in our social media feeds has been an evolution. It began with information our friends shared. It then merged in content that an algorithm thought we might want to see. It then became clickbait and content designed to target our emotions via these same algorithmic systems. But now, when many people open their Facebook or Instagram or TikTok feeds, what they're seeing is content created by AI. AI content is flooding Facebook and Instagram.
So what's going on here? Well, in part, these companies are doing what they've always been designed to do: give us content optimized to keep our attention.
If this content happens to be created by an AI, it might even do that better, since it can be designed by the AI specifically to hold our attention. AI is proving a very useful tool for this. But this has had some crazy consequences. It's led to the rise, for example, of AI influencers: rather than real people selling us ideas or products, these are AIs. Companies like Prada and Calvin Klein have hired an AI influencer named Lil Miquela, who has over 2.5 million followers on TikTok. And a model agency in Barcelona, tired of dealing with the schedules, demands, and egos of prima donna human models, created an AI model instead.
And that AI model brings in as much as €10,000 a month for the agency. But I think this gets at a far bigger issue: it's increasingly difficult to tell whether the things we're seeing are real or fake. If you scroll through the comments on the page of an AI influencer like Lil Miquela, it's clear that a good chunk of her followers don't know she's an AI.
Now, platforms are starting to deal with this a bit. TikTok requires users themselves to label AI content, and Meta says it will flag AI-generated content. But for this to work, platforms need a way of signaling it effectively and reliably to us as users, and they just haven't done that. Here's the thing, though: we can make them do it. The Canadian government's proposed Online Harms Act, for example, demands that platforms clearly identify AI- or bot-generated content. We can do this, but we have to make the platforms do it. And I don't think that can come a moment too soon.
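As a rough illustration of the kind of signaling Owen is calling for, here is a minimal sketch of platform-side labeling in Python: an AI-generated flag set at upload time travels with the post and is always surfaced at render time. The `Post` type, field names, and label format are hypothetical; real platforms would combine self-disclosure, detection, and provenance metadata.

```python
from dataclasses import dataclass

# Minimal sketch of platform-side AI-content labeling: the flag set at
# upload time travels with the post and is always surfaced at render.
# Field names and the render format are assumptions for illustration.

@dataclass(frozen=True)
class Post:
    author: str
    body: str
    ai_generated: bool  # set by the uploader or by platform detection

def render(post: Post) -> str:
    """Prepend a visible label whenever a post is AI-generated."""
    label = "[AI-generated] " if post.ai_generated else ""
    return f"{label}{post.author}: {post.body}"

print(render(Post("lilmiquela", "New drop with Prada!", ai_generated=True)))
print(render(Post("a_friend", "Lunch photos from Saturday.", ai_generated=False)))
```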
- Why human beings are so easily fooled by AI, psychologist Steven Pinker explains ›
- The geopolitics of AI ›
- AI and Canada's proposed Online Harms Act ›
- AI at the tipping point: danger to information, promise for creativity ›
- Will Taylor Swift's AI deepfake problems prompt Congress to act? ›
- Deepfake porn targets high schoolers ›
Deepfake recordings make a point in Georgia
A Georgia lawmaker used a novel approach to help pass legislation banning deepfakes in politics: he used a deepfake. Republican state Rep. Brad Thomas used an AI-generated recording of two of his bill's opponents, state Sen. Colton Moore and activist Mallory Staples, seemingly endorsing the bill.
Thomas presented the convincing audio to his peers, but cautioned that he made this fake recording on the cheap: “The particular one we used is, like, $50. With a $1,000 version, your own mother wouldn’t be able to tell the difference,” he said. The bill subsequently passed out of committee by an 8-1 vote.
Fake audio like this recently reared its head in national US politics when an ally of then-Democratic presidential candidate Dean Phillips released a fake robocall of President Joe Biden telling New Hampshire voters to stay home during the state’s primary. The Federal Communications Commission moved quickly in the aftermath of this incident to declare that AI-generated robocalls are illegal under federal law.
Voters beware: Elections and the looming threat of deepfakes
With AI tools already being used to manipulate voters across the globe via deepfakes, more needs to be done to help people comprehend what this technology is capable of, says Microsoft vice chair and president Brad Smith.
Smith highlighted a recent example of AI being used to deceive voters in New Hampshire.
“The voters in New Hampshire, before the New Hampshire primary, got phone calls. When they answered the phone, there was the voice of Joe Biden — AI-created — telling people not to vote. He did not authorize that; he did not believe in it. That was a deepfake designed to deceive people,” Smith said during a Global Stage panel on AI and elections on the sidelines of the Munich Security Conference last month.
“What we fundamentally need to start with is help people understand the state of what technology can do and then start to define what's appropriate, what is inappropriate, and how do we manage that difference?” Smith went on to say.
Watch the full conversation here: How to protect elections in the age of AI
- Hard Numbers: Bukele 2024, German troops in Lithuania, Manipur unrest, Chinese deepfake scam ›
- Taylor Swift AI images & the rise of the deepfakes problem ›
- Will Taylor Swift's AI deepfake problems prompt Congress to act? ›
- Combating AI deepfakes in elections through a new tech accord ›
- Can we use AI to secure the world's digital future? - GZERO Media ›
- Protecting science from rising populism is critical, says UNESCO's Gabriela Ramos - GZERO Media ›