US Justice Department vows to bring more cases against AI-generated CSAM
Federal prosecutors at the US Department of Justice are cracking down on AI-generated child sexual abuse material, or CSAM. James Silver, who leads the department’s Computer Crime and Intellectual Property Section, told Reuters “there’s more to come” following two criminal cases earlier this year.
“What we’re concerned about is the normalization of this,” Silver said. “AI makes it easier to generate these kinds of images, and the more [of them are] out there, the more normalized this becomes. That’s something that we really want to stymie and get in front of.”
In one such case announced in May, a Wisconsin man was arrested and charged with using the Stable Diffusion text-to-image model to create and distribute AI-generated CSAM. He also allegedly sent the images to a minor. “Put simply, CSAM generated by AI is still CSAM, and we will hold accountable those who exploit AI to create obscene, abusive, and increasingly photorealistic images of children,” said Deputy Attorney General Lisa Monaco at the time.
Prosecuting some of these cases may be legally complex. The First Amendment does not protect child pornography, but when there is no identifiable child in the images in question, prosecutors may have to get creative — likely charging violations of obscenity law, which is more subjective.
In a 2002 case, Ashcroft v. Free Speech Coalition, the Supreme Court struck down part of a congressional statute for being overly broad because it prohibited “any visual depiction, including any photograph, film, video, picture, or computer or computer-generated image or picture [that] is, or appears to be, of a minor engaging in sexually explicit conduct.” In the US, restrictions on speech need to be extremely specific and narrowly tailored to address an issue, or they won’t stand up in court. That legal precedent could place additional strain on prosecutors trying to demonstrate that AI-generated media should not be allowed.
Europol’s CSAM problem
Europol, Europe’s policing agency, said it has seen a marked increase in AI-generated child sexual abuse material. And it predicted the problem will get worse: “The use of AI which allows child sex offenders to generate or alter child sex abuse material is set to further proliferate in the near future,” the agency said in a statement. The technology also makes it easier for perpetrators to cyberbully and sexually extort victims for financial gain.
In a new report, Europol warns of the dangers of deepfake child abuse material. But it also says that the advent of AI makes it difficult to detect what’s real and what’s fake. And AI image generators could be trained on real CSAM: massive AI training datasets have been found to include numerous instances of it.