Elon Musk’s X may succeed in blocking Calif. content moderation law on appeal

Elon Musk's X previously failed to block the law on First Amendment grounds.

Elon Musk's fight defending X's content moderation decisions isn't just with hate speech researchers and advertisers. He has also long been battling regulators, and this week, he seemed positioned to secure a potentially big win in California, where he's hoping to permanently block a law that he claims unconstitutionally forces his platform to justify its judgment calls.

At a hearing Wednesday, three judges in the 9th US Circuit Court of Appeals seemed inclined to agree with Musk that a California law requiring disclosures from social media companies that clearly explain their content moderation choices likely violates the First Amendment.

Passed in 2022, AB-587 forces platforms like X to submit a "terms of service report" detailing how they moderate several categories of controversial content. Those categories include hate speech or racism, extremism or radicalization, disinformation or misinformation, harassment, and foreign political interference, which X's lawyer, Joel Kurtzberg, told judges yesterday "are the most controversial categories of so-called awful but lawful speech."

The law would seemingly require more transparency than ever from X, making it easy for users to track exactly how much controversial content X flags and removes—and perhaps most notably for advertisers, how many users viewed concerning content.

To block the law, X sued in 2023, arguing that California was trying to dictate its terms of service and force the company to make statements on content moderation that could generate backlash. X worried that the law "impermissibly" interfered both with "the constitutionally protected editorial judgments" of social media companies and with users' speech by requiring companies "to remove, demonetize, or deprioritize constitutionally protected speech that the state deems undesirable or harmful." Any companies found to be non-compliant could face stiff fines of up to $15,000 per violation per day, which X considered "draconian."
But last year, a lower court declined to block the law, prompting X to appeal, and yesterday, the appeals court seemed more sympathetic to X's case.

At the hearing, Kurtzberg told judges that the law was "deeply threatening to the well-established First Amendment interests" of an "extraordinary diversity of" people, which is why X's complaint was supported by briefs from reporters, freedom of the press advocates, First Amendment scholars, "conservative entities," and people across the political spectrum. All share "a deep concern about a statute that, on its face, is aimed at pressuring social media companies to change their content moderation policies, so as to carry less or even no expression that's viewed by the state as injurious to its people," Kurtzberg told judges.

When the court pointed out that the law seemingly just required X to abide by the content moderation policies for each category defined in its own terms of service—and did not compel X to adopt any policy or position that it did not choose—Kurtzberg pushed back.

"They don't mandate us to define the categories in a specific way, but they mandate us to take a position on what the legislature makes clear are the most controversial categories to moderate and define," Kurtzberg said. "We are entitled to respond to the statute by saying we don't define hate speech or racism. But the report also asks about policies that are supposedly, quote, 'intended' to address those categories, which is a judgment call."

"This is very helpful," Judge Anthony Johnstone responded. "Even if you don't yourself define those categories in the terms of service, you read the law as requiring you to opine or discuss those categories, even if they're not part of your own terms," and "you are required to tell California essentially your views on hate speech, extremism, harassment, foreign political interference, how you define them or don't define them, and what you choose to do about them?"
"That is correct," Kurtzberg responded, noting that X considered those categories the most "fraught" and "difficult to define."

California insisted X misreads the law

Disputing X's reading of the law, California Department of Justice attorney Gabrielle Boutin argued that the law "merely compels commercial product disclosures" from social media companies to ensure that their products work as their terms of service describe.

By law, Boutin argued, a disclosure is considered "uncontroversial" if it requires a statement of fact "about what the product does or does not do"—"even if it can be tied in some way to a controversial issue, and even if it can be used by others to support arguments in a heated political controversy."

According to Boutin, the question isn't whether the law is related to a controversial issue or whether platforms should moderate these categories of content, but whether social media companies are being "forced to state a message not about your product and that is contrary to your message." Because the law only asks X to back up what it says in its terms of service with transparency reporting, California argued, X is not.

The judges seemed unconvinced, however, that AB-587's disclosures were similar to consumer protection laws, as Boutin suggested, like those requiring tobacco or food companies to disclose what's in their products. Judge Milan Smith said X's case was "totally different" because stating what's in a cigarette or non-GMO food is a fact, "but here you're compelling people to say things that they may or may not want to talk about at all."

"We're all concerned about the fact you've got this list of things that have to be disclosed," Judge Smith told Boutin. "The state is saying you must tell us what, basically, what you think about each of these things. And if you don't, you got to tell us that, too. Why is that not compelled speech?"

It's hard to tell yet how the appeals court will rule. While X is hoping that the appeals court ruling will trigger an injunction blocking the law, judges seemed less convinced that the law could not be upheld if the categories at the heart of X's complaint were severed.
Seemingly, the court would consider the law to be less controversial if, for example, the law required X to generally report on content moderation without breaking out categories of harmful content to be tracked. But X argued there would be "very little left" to enforce if the categories were removed. At the end of the hearing, Judge Johnstone asked Boutin if she agreed that the terms of service's reporting requirements were "entirely keyed to the categories, so that if the categories don't exist, there's nothing to report?" Boutin declined to give a straight answer, urging the appeals court to remand the case to the lower court if the appeals court decides that any part of the law should be enjoined.