
Feds test whether existing laws can combat surge in fake AI child sex images

Kids defenseless against AI-generated sex images as feds expand crackdown.

Cops aren't sure how to protect kids from an ever-escalating rise in fake child sex abuse imagery fueled by advances in generative AI. Last year, child safety experts warned of thousands of "AI-generated child sex images" rapidly spreading on the dark web, around the same time the FBI issued a warning that "benign photos" of children posted online could be easily manipulated to exploit and harm kids.

So far, US prosecutors have brought only two criminal cases in 2024 attempting to use existing child pornography and obscenity laws to combat the threat, Reuters reported on Thursday. Meanwhile, as young girls are increasingly targeted by classmates in middle and high schools, at least one teen has called for a targeted federal law designed to end the AI abuse.

While it's hard to understand the full extent of the threat because kids often underreport sex abuse, the National Center for Missing and Exploited Children (NCMEC) told Reuters that it receives about 450 reports of AI child sex abuse each month. That's a tiny fraction of the 3 million monthly reports of child sex abuse occurring in the real world, but cops warned in January that this sudden flood of AI child sex abuse images was making it harder to investigate the real child abuse cases NCMEC tracks.

And the chief of the US Department of Justice's computer crime and intellectual property section, James Silver, told Reuters that as more people realize the abusive potential of AI tools, "there's more to come."

"What we're concerned about is the normalization of this," Silver told Reuters. "AI makes it easier to generate these kinds of images, and the more that are out there, the more normalized this becomes. That's something that we really want to stymie and get in front of."

One of the most popular and seemingly accessible ways that bad actors are perpetrating this harm is by using so-called "nudify" apps that remove clothing from ordinary photos kids otherwise feel safe sharing online. According to Wired, millions of people are using nudify bots on Telegram, including to generate harmful images of children. (That's the same chat app that swore it's not an "anarchic paradise" after its CEO was arrested for alleged crimes, including complicity in distributing child pornography.)

At least one state is attempting to crack down on the nudify apps themselves rather than the people using them. In August, San Francisco City Attorney David Chiu sued 16 popular nudify apps on behalf of the people of California, hoping to shut them down for good. Only one nudify app has apparently responded to that lawsuit, however, and this week the city attorney moved for default judgment against just one of the non-responsive apps. It's unclear how that lawsuit will play out, even if California wins. The city attorney's office did not respond to Ars' request to comment.

While California is attempting to target harmful app makers, some lawmakers agree that targeted laws would help protect victims and potentially deter crimes. In the absence of a federal law imposing penalties for AI-generated child sex abuse materials (CSAM), some states have rushed to pass their own laws protecting kids from brewing AI harms. Public Citizen maintains a tracker showing where those efforts have succeeded and failed. In more than 20 states, laws have been enacted to protect everyone from intimate deepfakes, while five states have enacted laws specifically designed to protect minors.
Currently, child safety advocates appear focused on pushing for broad federal laws to protect kids generally online. But in the US, those laws have proven divisive, and a targeted law could potentially fare better if it garnered enough bipartisan support. Until there is a targeted law, US prosecutors told Reuters that they're "stepping up their pursuit of suspects who use artificial intelligence tools to manipulate or create child sex abuse images," but no one is sure if today's laws will support any charges the DOJ brings.

DOJ tests if existing laws can combat surge

Part of the challenge of fighting AI-generated CSAM is that it's not always clear who the victim is. Although some cases involve AI manipulation of real children's photos, others involve AI-generated images that do not appear to depict an actual child. Silver told Reuters that in cases involving unidentifiable children, obscenity laws may still apply.

Reuters noted that these child exploitation cases are an early test that could soon reveal how hard it is for federal prosecutors to apply existing laws to combat emerging AI harms. But due to anticipated legal challenges, prosecutors may hesitate to bring AI cases where children have not been identified, despite the DOJ declaring in May that "CSAM generated by AI is still CSAM."

The earliest cases being prosecuted target bad actors using both the most accessible tools and the most sophisticated ones. One involves a US Army soldier who allegedly used bots to generate child pornography. The other involves a 42-year-old Wisconsin man, described as "extremely technologically savvy," who was charged with allegedly using Stable Diffusion to create "thousands of realistic images of prepubescent minors," which were then allegedly shared with a minor and distributed on Instagram and Telegram. (Stability AI, maker of Stable Diffusion, has repeatedly denied involvement in developing the Stable Diffusion model used, while promising to prevent its other models from generating harmful materials.)

Both men have pleaded not guilty, seemingly waiting to see how the courts navigate the complex legal questions that generative AI has raised in the child exploitation world.

Some child safety experts are pushing to hold app makers accountable, as California is doing, advocating for standards that block harmful outputs from AI image generators. But even if every popular app maker agreed, the threat would likely still loom on the dark web and in less-moderated corners of the Internet.

Public Citizen democracy advocate Ilana Beller in September urged lawmakers everywhere to clarify laws so that no victim ever has to wonder if there's any defense against the barrage of harmful AI images rapidly spreading online. Only criminalizing AI CSAM, as lawmakers have done with actual CSAM, will ensure the content is promptly detected and removed, the thinking goes, and Beller wants that same shield to be available to all victims of AI-generated nonconsensual intimate imagery.

"The rising tide of non-consensual intimate deepfakes is a threat to everyone from A-list celebrities to middle schoolers," Beller said. "Creating and sharing deepfake porn must be treated like the devastating crime that it is. Legislators in numerous states are making progress, but we need legislation to pass in all 50 states and Washington, DC, in order to ensure all people are protected from the serious harms of deepfake porn."