
US Justice Department vows to bring more cases against AI-generated CSAM

Federal prosecutors at the US Department of Justice are cracking down on AI-generated child sexual abuse material, or CSAM. James Silver, who leads the department’s Computer Crime and Intellectual Property Section, told Reuters “there’s more to come” following two criminal cases earlier this year.


“What we’re concerned about is the normalization of this,” Silver said. “AI makes it easier to generate these kinds of images, and the more [of them] out there, the more normalized this becomes. That’s something that we really want to stymie and get in front of.”

In one such case announced in May, a Wisconsin man was arrested and charged with using the Stable Diffusion text-to-image model to create and distribute AI-generated CSAM. He also allegedly sent the images to a minor. “Put simply, CSAM generated by AI is still CSAM, and we will hold accountable those who exploit AI to create obscene, abusive, and increasingly photorealistic images of children,” said Deputy Attorney General Lisa Monaco at the time.

There may be legal complexity in prosecuting some of these cases. The First Amendment does not protect child pornography, but when there’s not an identifiable child in the images in question, prosecutors might have to get creative — likely charging obscenity law violations, which are more subjective.

In a 2002 case, Ashcroft v. Free Speech Coalition, the Supreme Court struck down part of a congressional statute for being overly broad because it prohibited “any visual depiction, including any photograph, film, video, picture, or computer or computer-generated image or picture [that] is, or appears to be, of a minor engaging in sexually explicit conduct.” In the US, restrictions on speech need to be extremely specific and narrowly tailored to address an issue, or they won’t stand up in court. That legal precedent could place additional strain on prosecutors trying to demonstrate that AI-generated media should not be allowed.
