Europol headquarters in The Hague, Netherlands.
Europe’s AI deepfake raid
Europol, the European Union’s law enforcement agency, arrested 24 people across 19 countries last Wednesday in a global crackdown on AI-generated child pornography. The arrests stretched beyond the EU, with suspects taken into custody in Australia, the United Kingdom, and New Zealand in coordination with local police.
The crackdown is part of a campaign called Operation Cumberland, which began in November with the arrest of a lead suspect in Denmark. The ringleader allegedly ran a website where people paid to access images of children that he created with help from artificial intelligence. Europol wrote in a press release that there are 273 suspects in total, and that investigators have conducted 33 house searches and seized 173 electronic devices.
“Operation Cumberland has been one of the first cases involving AI-generated child sexual abuse material (CSAM), making it exceptionally challenging for investigators, especially due to the lack of national legislation addressing these crimes,” Europol wrote in a statement.
The agency noted that EU member states are currently discussing regulations specifically addressing this type of content, so it’s unclear what the exact legal basis for the arrests is. (Europol did not respond to a request for comment by press time.) Nick Reiners, a senior geo-technology analyst at Eurasia Group, said he believes the legal basis would be national laws that do not distinguish CSAM from AI-generated CSAM. That said, there’s good reason for a new EU law: “The objective of the proposed new Directive is primarily to harmonize, update and strengthen national laws across EU member states, in part to make it easier to prosecute,” Reiners added.
The agency has said that more arrests are expected in the coming weeks.
Cabs drive along Westminster Bridge in front of the British Parliament with the Elizabeth Tower and the famous Big Ben bell.
Britain unveils new child deepfake law
The United Kingdom is set to unveil the world’s first national law criminalizing the use of artificial intelligence tools for generating child sex abuse material, or CSAM.
Home Secretary Yvette Cooper said in a Sunday BBC interview that AI is leading to “online child abuse on steroids.” A series of four laws will, among other things, make it illegal to possess, create, or distribute AI tools designed to make CSAM, which would carry a maximum five-year prison sentence. The government will also criminalize running websites where abusers can share this material or advice about creating it.
The Internet Watch Foundation, which focuses on eliminating CSAM on the internet, issued a new report on Sunday showing that AI-generated CSAM found online has quadrupled over the past year.
The United States criminalizes CSAM, but there’s some gray area about whether AI-generated content is treated the same under federal law. In 2024, 18 states passed laws specifically outlawing AI-generated CSAM, but so far there’s no federal law on the books.
A computer keyboard with a blue light on it.
US Justice Department vows to bring more cases against AI-generated CSAM
Federal prosecutors at the US Department of Justice are cracking down on AI-generated child sexual abuse material, or CSAM. James Silver, who leads the department’s Computer Crime and Intellectual Property Section, told Reuters “there’s more to come” following two criminal cases earlier this year.
“What we’re concerned about is the normalization of this,” Silver said. “AI makes it easier to generate these kinds of images, and the more [of them] out there, the more normalized this becomes. That’s something that we really want to stymie and get in front of.”
In one such case, announced in May, a Wisconsin man was arrested and charged with using the Stable Diffusion text-to-image model to create and distribute AI-generated CSAM. He also allegedly sent the images to a minor. “Put simply, CSAM generated by AI is still CSAM, and we will hold accountable those who exploit AI to create obscene, abusive, and increasingly photorealistic images of children,” said Deputy Attorney General Lisa Monaco at the time.
There may be legal complexity in prosecuting some of these cases. The First Amendment does not protect child pornography, but when there’s not an identifiable child in the images in question, prosecutors might have to get creative — likely charging obscenity law violations, which are more subjective.
In a 2002 case, Ashcroft v. Free Speech Coalition, the Supreme Court struck down part of a congressional statute for being overly broad because it prohibited “any visual depiction, including any photograph, film, video, picture, or computer or computer-generated image or picture [that] is, or appears to be, of a minor engaging in sexually explicit conduct.” In the US, restrictions on speech must be extremely specific and narrowly tailored to the issue they address, or they won’t stand up in court. That precedent could complicate prosecutors’ efforts to argue that AI-generated imagery of this kind falls outside First Amendment protection.
EncroChat and Europol logos are seen in this illustration taken, June 27, 2023.
Europol’s CSAM problem
Europol, Europe’s policing agency, said it has seen a marked increase in AI-generated child sexual abuse material, aka CSAM. And it predicted the problem will get worse: “The use of AI which allows child sex offenders to generate or alter child sex abuse material is set to further proliferate in the near future,” the agency said in a statement. The technology also makes it easier for perpetrators to cyberbully and sexually extort victims for financial gain.
In a new report, Europol warns of the dangers of deepfake child abuse material. But it also says that the advent of AI makes it difficult to detect what’s real and what’s fake. And the AI models generating these images could be trained on real CSAM: massive AI training datasets have been found to include numerous instances of it.