
Deepfake it till you make it

Prime Minister Narendra Modi greets people during the Hindustan Times Leadership Summit, in New Delhi, on Saturday, Nov. 4, 2023.

ANI via Reuters Connect
Scott Nover, Contributing Writer
https://x.com/ScottNover
https://www.linkedin.com/in/scottnover/
AI-generated songs featuring the (fake) voice of Indian Prime Minister Narendra Modi are taking over Instagram. One video, a would-be Modi cover of a popular Bollywood song, was viewed 3.4 million times on Instagram Reels, which became India’s leading social video platform after the country banned TikTok in 2020. The online magazine Rest of World notes that these songs, translated into India’s many regional languages, could break down a language barrier for the Hindi-speaking Modi ahead of the 2024 general election.

While deepfake technology is typically associated with deceit – tricking voters and disrupting democracy – this seems like a more innocuous way of influencing global politics. But the Indian government hasn’t embraced this technology: It recently considered drastic action to compel WhatsApp parent company Meta to break the app’s encryption and identify the creators of deepfake videos of politicians. Deepfakes could, in other words, have a tangible impact on the world’s largest democracy.

In the United States, it’s not just politicians who have clashed with AI over this brand of imitation, but celebrities too. According to a report in Variety, actress Scarlett Johansson has taken “legal action” against the app Lisa AI, which used a deepfake version of her image and voice in a 22-second ad posted on X. The law may be on Johansson’s side: California’s right of publicity law prohibits the use of someone’s name, image, or likeness in an advertisement without permission. In their ongoing strike, Hollywood actors have also been bargaining over whether, and how, studios may use AI re-creations of their likenesses.

Deepfake technology is only improving, making it ever more difficult to determine when a politician or celebrity is appearing before your eyes – and when it’s just a dupe. In his recent executive order on AI, President Joe Biden called for new standards for watermarking AI-generated media so people know what’s real and what’s computer generated.

That approach – akin to US consumer protections for advertising – has obvious appeal, but it might not be technically foolproof, experts say. What’s more likely is that the US court system will try to apply existing statutes to new technology, only to reveal (possibly glaring) gaps in the laws.

Generative AI and deepfakes have already crept into the 2024 election, including a Republican National Committee ad depicting a dystopian second Biden term. But look closely at the top-left corner of the ad toward the end, and you’ll notice the following disclosure: “Built entirely with AI imagery.” Surely, this won’t be the last we see of AI in this election – we’ll be keeping an eye out for all the ways it rears its head on the campaign trail.