Cops aren't sure how to protect kids from an escalating flood of fake child sex abuse imagery fueled by advances in generative AI.
Last year, child safety experts warned of thousands of "AI-generated child sex images" rapidly spreading on the dark web, around the same time that the FBI warned that "benign photos" of children posted online could easily be manipulated to exploit and harm kids.
So far in 2024, US prosecutors have brought only two criminal cases attempting to use existing child pornography and obscenity laws to combat the threat, Reuters reported on Thursday. Meanwhile, as young girls are increasingly targeted by classmates in middle and high schools, at least one teen has called for a targeted federal law designed to end the AI abuse.
While it's hard to understand the full extent of the threat because kids often underreport sex abuse, the National Center for Missing & Exploited Children (NCMEC) told Reuters that it receives about 450 reports of AI child sex abuse each month. That's a tiny fraction of the 3 million monthly reports of child sex abuse occurring in the real world, but cops warned in January that this sudden flood of AI child sex abuse images was already making it harder to investigate the real child abuse cases that NCMEC tracks.
And James Silver, chief of the US Department of Justice's Computer Crime and Intellectual Property Section, told Reuters that as more people realize the abusive potential of AI tools, "there's more to come."
"What we’re concerned about is the normalization of this,” Silver told Reuters. “AI makes it easier to generate these kinds of images, and the more that are out there, the more normalized this becomes. That’s something that we really want to stymie and get in front of.”
One of the most popular and accessible ways that bad actors perpetrate this harm is with so-called "nudify" apps, which strip clothing from ordinary photos that kids otherwise feel safe sharing online. According to Wired, millions of people are using nudify bots on Telegram, including to generate harmful images of children. (That's the same chat app that swore it's not an "anarchic paradise" after its CEO was arrested for alleged crimes, including complicity in distributing child pornography.)
At least one state is attempting to crack down on the nudify apps themselves rather than on the people who use them. In August, San Francisco City Attorney David Chiu sued 16 popular nudify apps on behalf of the people of California, hoping to shut them down for good. So far, only one of the apps has responded to that lawsuit, and this week the city attorney moved for default judgment against just one of the other non-responsive apps.
It's unclear how that lawsuit will play out, even if California wins. The city attorney's office did not respond to Ars' request for comment.
While California is targeting harmful app makers, some lawmakers argue that targeted laws would help protect victims and potentially deter crimes. In the absence of a federal law imposing penalties for AI-generated child sex abuse material (CSAM), some states have rushed to pass their own laws protecting kids from brewing AI harms. Public Citizen maintains a tracker showing where those efforts have succeeded and failed. More than 20 states have enacted laws protecting everyone from intimate deepfakes, while five states have passed laws specifically designed to protect minors.
Currently, child safety advocates appear focused on pushing for broad federal laws to protect kids online generally. But in the US, those laws have proven divisive, and a narrowly targeted law could fare better if it garnered enough bipartisan support. Until one passes, US prosecutors told Reuters that they're "stepping up their pursuit of suspects who use artificial intelligence tools to manipulate or create child sex abuse images," but no one is sure whether today's laws will support any charges the DOJ brings.