Deepfake disclosure is coming for politics
With the 2024 election looming, the US Federal Communications Commission has proposed a first-of-its-kind rule requiring disclosure of AI-generated content in political ads on TV and radio.
The proposal came last Thursday, just a day before billionaire Elon Musk shared a video featuring an AI-generated voice impersonating Vice President Kamala Harris, now the presumed Democratic nominee for president, on his social media platform, X. In the video, the fake Harris voice calls Biden “senile” and refers to herself as the “ultimate diversity hire.”
While about 20 states have adopted laws regulating AI in political content, and many social media companies like Meta have banned AI-generated political ads outright, there’s still no comprehensive federal regulation of this content.
That said, after voters in New Hampshire received robocalls with an AI-generated voice of Joe Biden telling them not to vote in the state’s primary, the FCC clarified that AI-generated robocalls are illegal under an existing statute.
The FCC rule, if adopted, would require any political ads run on broadcast television and radio stations to disclose whether they were created with artificial intelligence. That wouldn’t ban these ads outright — and certainly wouldn’t slow their spread on social media — but it would be a first step for the government in cracking down on AI in politics.
New rules for political ads
Meta wasn’t the first Big Tech platform to require disclosure of AI in political ads. Google, the largest advertising company in the world, announced a similar stance in September. But Meta and Google are encouraging ad agencies and small companies to use their generative AI tools for commercial marketing campaigns – drawing a line in the sand when it comes to politics.
Proposed legislation: In doing so, they’re likely front-running future regulation. In May, a group of US senators including Amy Klobuchar introduced legislation to mandate disclosures when AI technology is used in political ads. Klobuchar and Rep. Yvette Clarke, the bill’s House sponsor, sent letters to Meta — along with X, formerly known as Twitter — asking why they were not requiring these disclosures. One month later, Meta voluntarily implemented the policy.
In September, Klobuchar introduced a separate bipartisan bill with Republican Sen. Josh Hawley aimed at explicitly prohibiting the use of AI-generated content in political ads. In October, she told Bloomberg she thinks there’s a “good chance” Congress will pass legislation regulating AI in political ads before the end of the year.
Will voters care? The issue seems to have public support too: A recent poll by Axios and Morning Consult found that 78% of Americans surveyed think political advertisers who use AI should be “required to disclose how AI was used to create the ad.” By contrast, only 64% said the same about how AI is used “in professional spaces.” So if candidates play fast and loose with generative AI in their campaigns, it could backfire on them at the voting booth.