US Justice Department vows to bring more cases against AI-generated CSAM
Federal prosecutors at the US Department of Justice are cracking down on AI-generated child sexual abuse material, or CSAM. James Silver, who leads the department’s Computer Crime and Intellectual Property Section, told Reuters “there’s more to come” following two criminal cases earlier this year.
“What we’re concerned about is the normalization of this,” Silver said. “AI makes it easier to generate these kinds of images, and the more [of them] out there, the more normalized this becomes. That’s something that we really want to stymie and get in front of.”
In one such case announced in May, a Wisconsin man was arrested and charged with using the text-to-image model Stable Diffusion to create and distribute AI-generated CSAM. He also allegedly sent the images to a minor. “Put simply, CSAM generated by AI is still CSAM, and we will hold accountable those who exploit AI to create obscene, abusive, and increasingly photorealistic images of children,” said Deputy Attorney General Lisa Monaco at the time.
There may be legal complexity in prosecuting some of these cases. The First Amendment does not protect child pornography, but when no identifiable child appears in the images in question, prosecutors may have to get creative, likely charging violations of obscenity law, which are more subjective.
In a 2002 case, Ashcroft v. Free Speech Coalition, the Supreme Court struck down part of a congressional statute as overly broad because it prohibited “any visual depiction, including any photograph, film, video, picture, or computer or computer-generated image or picture [that] is, or appears to be, of a minor engaging in sexually explicit conduct.” In the US, restrictions on speech must be specific and narrowly tailored to the problem they address, or they won’t stand up in court. That precedent could place additional strain on prosecutors trying to demonstrate that AI-generated material falls outside constitutional protection.

Repercussions come for AI-generated child porn
The case is novel: it’s the first time the federal government has brought charges over child pornography generated entirely by AI. Prosecutors say Anderegg, the Wisconsin man charged in May, created a trove of 13,000 fake images using the text-to-image generator Stable Diffusion, made by the company Stability AI, along with certain add-ons to the technology. This isn’t the first controversy involving Stable Diffusion, though. In December, Stanford University researchers found that LAION-5B, a dataset used to train Stable Diffusion, included 1,679 illegal images of child sexual abuse material.
This case could set a new precedent for an open question: Is AI-generated child pornography — for all intents and purposes under the law — child pornography?