There are 21 days until Election Day in the United States — and voters in numerous states have already begun early voting. So far, artificial intelligence applications have had minimal effects on the election, though the technology has reared its head a few times.
During this US election cycle, generative AI has been used in an RNC ad, a fraudulent Joe Biden robocall for New Hampshire voters, and deepfake photos of Taylor Swift endorsing Donald Trump.
Microsoft and OpenAI say they’ve disrupted foreign influence campaigns from China, Iran, and Russia seeking to sow discord in the US, including around hot-button political issues such as Israel’s war in Gaza.
While malicious actors haven’t yet used AI tools in particularly novel ways, the technology has made it easier, quicker, and cheaper to generate online propaganda and disseminate it over social media. In Indonesia, for example, notorious Defense Minister Prabowo Subianto used a chubby-cheeked, friendly AI-generated avatar to appeal to voters in the presidential election. In Pakistan, Imran Khan used AI voice cloning to spread his political message and support his party’s candidates from prison.
Now, with the US election looming, there’s a very real possibility of a more malicious and effective AI campaign targeting Americans. So GZERO AI asked experts what they’re most concerned about in the run-up to Nov. 5. Their overriding concern was misinformation – how AI is used to create and distribute it, and how it could affect whether and how people vote.
Valerie Wirtschafter, a fellow in the Artificial Intelligence and Emerging Technology Initiative at the Brookings Institution, for example, said she was concerned by the onslaught of generative AI images circulating on social media in the aftermath of Hurricane Helene — including ones alleging that the Biden administration wasn’t doing enough to support residents affected by the storm.
“These images were clearly AI, and when pointed out as such, the response was a simple shrug – that the images resonated because they ‘felt accurate’ anyways,” she said.
There hasn’t been any new federal legislation in the US regarding AI use around elections, and the Federal Election Commission recently chose to forgo new rulemaking on the matter ahead of November. That said, OpenAI, Anthropic, and most major AI companies have self-regulated, instituting rules that bar users from generating election-related materials, such as images of presidential candidates. Many of them will refuse to provide voting information as well. Still, many of these rules are porous.
Wirtschafter said she’s most concerned about AI-generated media — particularly audio — being used not to affect how people vote but rather whether people vote. AI-generated audio content, she said, could be used to “try to prevent a targeted but vital subset of the population from voting” or “sow confusion about where and how to vote.”
“While swing states have prepared for this possibility, it is still such a difficult task, and AI-generated content is most impactful at the local and highly targeted level.”
Scott Bade, a senior analyst in Eurasia Group’s geo-technology practice, said he’s concerned not only by the use of generative AI in the lead-up to the election but also by how politicians might invoke the technology to help cast doubt on things that are, in fact, true.
Like Wirtschafter, Bade said he’s most worried about anything that “muddies the waters and creates fear and confusion that can suppress votes on election day.”
But the threat won’t end after Americans go to the polls. The 2020 election and aftermath showed how conspiracy theories abound even without generative AI.
Politicians, especially those aligned with Trump, falsely claimed there was widespread voter fraud. Bade warned that AI might be used to affect how voters feel about the “sanctity of the ballot.”
So, what should we do about it? Around an election, it’s important to keep an eye on the source of the materials you’re viewing, check government websites for reliable voting information, and take everything you hear or see in this age of AI with a grain of salt – even if it confirms your prior assumptions.
“This type of content can be obviously AI-generated but still ‘feel’ correct,” Wirtschafter said.