With the 2024 election looming, the US Federal Communications Commission has proposed a first-of-its-kind rule requiring disclosure of AI-generated content in political ads on TV and radio.
The proposal came last Thursday, just a day before billionaire Elon Musk shared a video on his social media platform, X, featuring an AI-generated voice impersonating Vice President Kamala Harris, now the presumed Democratic nominee for president. In the video, the fake Harris voice calls Biden “senile” and refers to herself as the “ultimate diversity hire.”
While about 20 states have adopted laws regulating AI in political content, and many social media companies, like Meta, have banned AI-generated political ads outright, there’s still no comprehensive federal regulation of such content.
That said, after voters in New Hampshire received robocalls featuring an AI-generated voice of President Joe Biden telling them not to vote in the state’s primary, the FCC clarified that AI-generated robocalls are illegal under an existing statute.
The FCC rule, if passed, would require any political ads run on broadcast television and radio stations to disclose if they’re created with artificial intelligence. That wouldn’t ban these ads outright — and certainly wouldn’t slow their spread on social media — but it would be a first step for the government in cracking down on AI in politics.