The FEC kicks AI down the road

Photo by Erik Mclean on Unsplash
The US Federal Election Commission will not regulate deepfakes in political ads before November’s elections. Last week, Republican commissioners effectively killed the proposal to do so, writing in a memo that such rulemaking would exceed the commission’s authority under the law. Chairman Sean Cooksey also told Axios on Aug. 8 that the FEC will not consider further rules before the election.

That means the job of keeping deepfakes out of political ads will largely fall to tech platforms and AI developers. Tech companies signed an agreement at the Munich Security Conference in February, vowing to take “reasonable precautions” to prevent their AI tools from being used to disrupt elections. The task could also fall to broadcasters: The Federal Communications Commission is still considering new rules for AI-generated content in political ads on broadcast television and radio stations. That has caused tension between the two agencies as well. The FEC doesn’t believe the FCC has the statutory authority to act, but the FCC maintains that it does.

After a deepfake version of Joe Biden’s voice was used in a robocall in the run-up to the New Hampshire Democratic primary, intended to trick voters into staying home, the FCC asserted that AI-generated robocalls were illegal under existing law. But the clock is ticking on further action, since other AI-manipulated media may not currently be covered under the law. At this point, serious regulation from either agency seems likely to come only after Donald Trump and Kamala Harris square off in November, and perhaps only if Harris wins, as another Trump presidency might mean a further rollback of election rules.
