After years of controversy over plans to scan iCloud to find more child sexual abuse material (CSAM), Apple abandoned those plans last year. Now, child safety experts have accused the tech giant not only of failing to flag CSAM exchanged and stored on its services—including iCloud, iMessage, and FaceTime—but also of failing to report all the CSAM that is flagged.
The United Kingdom’s National Society for the Prevention of Cruelty to Children (NSPCC) shared UK police data with The Guardian showing that Apple is "vastly undercounting how often" CSAM is found globally on its services.
According to the NSPCC, police investigated more CSAM cases in the UK alone in 2023 than Apple reported globally for the entire year. Between April 2022 and March 2023 in England and Wales, the NSPCC found, "Apple was implicated in 337 recorded offenses of child abuse images." But in 2023, Apple reported only 267 instances of CSAM to the National Center for Missing & Exploited Children (NCMEC), supposedly representing all the CSAM on its platforms worldwide, The Guardian reported.
Large tech companies in the US must report CSAM to NCMEC when it's found, but while Apple reports a couple hundred CSAM cases annually, its big tech peers like Meta and Google report millions, NCMEC's report showed. Experts told The Guardian that there's ongoing concern that Apple "clearly" undercounts CSAM on its platforms.
Richard Collard, the NSPCC's head of child safety online policy, told The Guardian that he believes Apple's child safety efforts need major improvements.
“There is a concerning discrepancy between the number of UK child abuse image crimes taking place on Apple’s services and the almost negligible number of global reports of abuse content they make to authorities,” Collard told The Guardian. “Apple is clearly behind many of their peers in tackling child sexual abuse when all tech firms should be investing in safety and preparing for the rollout of the Online Safety Act in the UK.”
Outside the UK, other child safety experts shared Collard's concerns. Sarah Gardner, the CEO of a Los Angeles-based child protection organization called the Heat Initiative, told The Guardian that she considers Apple's platforms a "black hole" obscuring CSAM. And she expects that Apple's efforts to bring AI to its platforms will intensify the problem, potentially making it easier to spread AI-generated CSAM in an environment where sexual predators may expect less enforcement.
"Apple does not detect CSAM in the majority of its environments at scale, at all," Gardner told The Guardian.
Gardner agreed with Collard that Apple is "clearly underreporting" and has "not invested in trust and safety teams to be able to handle this" as it rushes to bring sophisticated AI features to its platforms. Last month, Apple announced that it would integrate ChatGPT into Siri, iOS, and macOS, perhaps setting expectations for continually enhanced generative AI features to be touted in future Apple gear.
“The company is moving ahead to a territory that we know could be incredibly detrimental and dangerous to children without the track record of being able to handle it,” Gardner told The Guardian.
So far, Apple has not commented on the NSPCC's report. Last September, Apple did respond to the Heat Initiative's demands to detect more CSAM, saying that rather than scanning for illegal content, it focuses on connecting vulnerable or victimized users directly with local resources and law enforcement that can assist them in their communities.
Spiking sextortion, surge in AI-generated CSAM
Last fall, as Apple shifted its focus from detecting CSAM to supporting victims, every state attorney general in the US signed a letter to Congress urging lawmakers to get serious about studying how children could be harmed by AI-generated CSAM.
Lawmakers entertained legislation but largely have not acted quickly enough to get ahead of the problem, child safety experts worry. By January, US law enforcement had sounded alarms, warning that a "flood" of AI-generated CSAM was making it harder to investigate real-world child abuse. Adding to concerns, Human Rights Watch (HRW) researchers discovered that popular AI models were being trained on real photos of kids, even when parents use stricter privacy settings on social media, seemingly increasing the likelihood that AI-generated CSAM might resemble those real kids.
Add to that the FBI's growing concern over child sextortion online—the agency recently reported "a huge increase in the number of cases involving children and teens being threatened and coerced into sending explicit images online"—and it's easy to see why communities are demanding a stronger response from lawmakers who might hold platforms more accountable.
If spiking sextortion increases the risk of CSAM spreading online and those explicit images are then used to generate more harmful content, improvements in AI could make it even more dangerous for kids to share content online, HRW researchers fear. Even posting innocuous photos on social media could harm kids in middle and high school as increasingly targeted explicit deepfakes can swap any kid's face onto existing child sex abuse imagery.
Either way, for kids targeted by AI-generated CSAM, the harms are real. If the deepfake trend takes hold as a more common schoolyard prank, actual child abuse victims will likely face a greater risk of being re-traumatized as their abuse materials are potentially repurposed again and again, while victims of AI-generated CSAM report anxiety and depression, as well as feeling unsafe at school.
Because the spread of child sex images online is traumatizing regardless of whether it results from extortion by strangers, harassment by peers, or AI-generated fakery, the line between real CSAM and AI-generated CSAM has blurred globally.
This month, a youth court in Spain sentenced 15 teens who created naked AI images of classmates, finding them responsible for 20 counts of creating child sex abuse images. In the US, the Department of Justice agreed that “CSAM generated by AI is still CSAM” in a rare case involving the arrest of a 42-year-old Wisconsin man who allegedly used Stable Diffusion to create "thousands of realistic images of prepubescent minors," which were then distributed on Instagram and Telegram.
While Apple faces backlash from child safety experts demanding more action against CSAM on its platforms ahead of the company's push into AI, other tech companies that report CSAM more widely are being urged to update their policies to reflect new AI threats.
In April, for example, Meta's Oversight Board announced that it would review two cases involving AI-generated sexualized images of female celebrities, which Meta initially handled unevenly, to "assess whether Meta’s policies and its enforcement practices are effective at addressing explicit AI-generated imagery." It has been months since that probe was announced, however, and the board seemingly has not yet reached a decision, suggesting that without clear legal guidance, even experts dedicated to enhancing public safety online may struggle to weigh the harms of explicit AI-generated images spreading on platforms.