The Washington Post’s technology columnist, Geoffrey Fowler, recently asked 2024 US presidential candidates to take an “AI Pledge” promising to:
- Label any communication made with generative AI tools.
- Not use AI to misrepresent what a competitor has done or said.
- Not use AI to misrepresent what you have done or said.
- Not use AI to confuse people about how to vote.
AI-generated media can be innocuous: Take that image of Pope Francis looking fresh in a white puffer coat, which went viral earlier this year. But it could also be dangerous — experts have warned for years that deepfakes and other synthetic media could cause mass chaos or disrupt elections if wielded maliciously and believed by enough people. It could, in other words, supercharge an already-pervasive disinformation problem.
We’ve not reached that point yet, but AI has already crept into domestic politicking this year. In April, the Republican National Committee ran an AI-generated ad depicting a dystopian second presidential term for Joe Biden. In July, Florida governor and presidential hopeful Ron DeSantis used an artificially generated Donald Trump voice in an attack ad against his opponent.
There’s been some pushback: Google recently mandated that political ads carry a written disclosure if AI is used, and a group of US senators wants to write a similar requirement into law. But until then, perhaps a pledge like Fowler’s could offer some baseline assurance that cutting-edge technology won’t be used by America’s most powerful people for anti-democratic ends. We already have enough people doubting free and fair elections without the influence of AI.
No candidates have taken Fowler’s pledge, but it got one key endorsement from Senate Majority Leader Chuck Schumer. “Maybe most candidates will make that pledge,” Schumer said. “But the ones that won’t will drive us to a lower common denominator … If we don’t have government-imposed guardrails, the lowest common denominator will prevail.”