
Judge tosses social platforms’ Section 230 blanket defense in child safety case

Judge: Section 230 doesn’t cover platform design defects allegedly harming kids.

This week, some of the biggest tech companies found out that Section 230 immunity doesn't shield them from some of the biggest complaints alleging that their social media platform designs are defective and harm children and teen users. On Tuesday, US District Judge Yvonne Gonzalez Rogers ruled that discovery can proceed in a lawsuit documenting individual cases involving hundreds of children and teens across 30 states who were allegedly harmed by social media use.

Their complaint alleged that tech companies negligently operate platforms with many design defects—including a lack of parental controls, insufficient age verification, complicated account deletion processes, appearance-altering filters, and requirements forcing users to log in to report child sexual abuse materials (CSAM)—and failed to warn young users and their parents about those defects. The defendants are the companies operating "the world’s most used social media platforms: Meta’s Facebook and Instagram, Google’s YouTube, ByteDance’s TikTok, and Snapchat."

All of these companies moved to dismiss the multidistrict litigation entirely, hoping that the First Amendment and Section 230 immunity would effectively bar all of the plaintiffs' claims—including, apparently, claims the companies never bothered to address in their motions to dismiss. "Defendants were adamant that the entirety of the complaint should be dismissed under Section 230 of the Communications Decency Act of 1996 and the First Amendment," Gonzalez Rogers wrote.

Under the 9th Circuit's three-part test for whether Section 230 applies, tech companies must prove that they are (1) providers of an interactive computer service (2) whom plaintiffs seek to treat as a publisher or speaker (3) of information provided by another information content provider. Plaintiffs argued that the second prong wasn't met because their complaint was aimed entirely at how the platforms were designed, not at content published on them. The tech companies argued that all three prongs were met because every alleged harm supposedly stemmed from content on their platforms.

Gonzalez Rogers found that both sides' "all or nothing" approaches to the motions to dismiss did "not sufficiently address the complexity of the issues facing this litigation" or "fairly or accurately represent the Ninth Circuit’s application of Section 230 immunity."

Her ruling sheds new light on how courts interpret the scope of Section 230 immunity. She wrote that "the issue is whether the defendant’s alleged duty to the plaintiff could 'have been satisfied without changes to the content posted by the website’s users and without conducting a detailed investigation.'" While the platforms hoped she would find that none of their duties could be satisfied without those conditions being met, Gonzalez Rogers instead identified nine alleged design defects that "do not implicate publishing or monitoring of third-party content," "are not equivalent to speaking or publishing," and "can be fixed by defendants without altering the publishing of third-party content" and "thus are not barred by Section 230."
According to the order, the alleged defects not barred by Section 230 include:

- not providing effective parental controls;
- not providing options for young users to self-restrict time used on the platform;
- making it challenging for users to delete their accounts;
- not using robust age verification;
- making it challenging for users to report predators and inappropriate content;
- offering appearance-altering filters;
- not labeling filtered content;
- the timing of notifications alleged to increase addictive use; and
- not implementing "reporting protocols to allow users or visitors of defendants’ platforms to report CSAM and adult predator accounts specifically without the need to create or log in to the products prior to reporting."

Blanket immunity dreams dashed

Tech companies were seemingly so confident that Section 230 would bar all of these claims that they did not even address some of the claims that survived, failing to "directly address application of Section 230 to the parental control-related defects," the filter-related allegations, or "how altering the ways in which they allow users and visitors to their platforms to report CSAM is barred by Section 230," Gonzalez Rogers wrote. Further, she noted that the companies argued at a hearing that "making reporting more accessible would necessarily require them to remove CSAM, in violation of Section 230." The companies were similarly silent on several defects that she determined were not barred by First Amendment defenses.

While Section 230 did protect platforms from some other claims—such as those alleging "use of algorithms to promote addictive engagement" and "recommending minor accounts to adult strangers"—plaintiffs will be allowed to proceed with discovery to support the surviving claims. That could result in a better public understanding of how platforms approach child safety as advocates and regulators continue protesting current platform norms that they believe have created a "youth mental health crisis," the order said.

Court-appointed lead plaintiffs’ counsel in the litigation—Lexi Hazam of Lieff Cabraser Heimann & Bernstein, LLP; Previn Warren of Motley Rice LLC; and Chris Seeger of Seeger Weiss LLP—provided Ars with a statement on Gonzalez Rogers' order. "Today’s decision is a significant victory for the families that have been harmed by the dangers of social media," their statement said. "The Court’s ruling repudiates Big Tech’s overbroad and incorrect claim that Section 230 or the First Amendment should grant them blanket immunity for the harm they cause to their users."

Some platforms have recently taken steps to enhance child safety, including TikTok giving parents device-level controls to stop teens from downloading the app and Meta creating stronger default privacy settings for minors on Facebook and Instagram. More recently, Meta has pushed lawmakers to require app stores to get parental permission before minors can download any app, The Washington Post reported.

But so far, no platform has satisfied the most vocal parents advocating for even stronger child safety protections, or met lawmakers' expectations in the European Union, which has stricter privacy laws than the US. This fall, the EU required platforms like YouTube and TikTok to detail their child protection measures by November 30 or face fines of as much as 6 percent of their global turnover for any failures to protect children from illegal and harmful content, Reuters reported.

Until laws clarify platforms' liability for alleged harms to young users, the lawyers representing children and teens suing social media companies suggested in their statement that courts must carefully interpret current laws and intervene to stop a spiraling mental health crisis that platforms cannot be expected to fully address on their own. The young plaintiffs hope the court will ultimately rule that "defendants owe users the duty to design such products in a reasonably safe manner and to warn about risks they pose."

"The mental health crisis among American youth is a direct result of these defendants’ intentional design of harmful product features," plaintiffs' lawyers said.
"We will continue to fight for those who have been harmed by the misconduct of these social media platforms, and ensure they are held accountable for knowingly creating a vast mental health crisis that they still refuse to acknowledge and address.”