A federal judge yesterday blocked a Texas state law that bans "censorship" on social media platforms, ruling that the law violates the social networks' First Amendment right to moderate user-submitted content.
"Social media platforms have a First Amendment right to moderate content disseminated on their platforms," Judge Robert Pitman wrote. He found that the Texas law "compels social media platforms to disseminate objectionable content and impermissibly restricts their editorial discretion" and that the law's "prohibitions on 'censorship' and constraints on how social media platforms disseminate content violate the First Amendment."
Pitman's ruling granted a preliminary injunction requested by tech industry groups NetChoice and the Computer & Communications Industry Association (CCIA), which sued Texas in US District Court for the Western District of Texas. Facebook, Google, Twitter, and various other tech companies belong to the groups.
The injunction prohibits the Texas attorney general from enforcing the law. The judge found that the plaintiffs are likely to succeed on the merits of their case, a prerequisite for granting a preliminary injunction.
“Replete with constitutional defects”
Pitman rejected a severability clause that aimed to save parts of the law if the rest was invalidated. He found that nothing can be severed from the law and survive because the unconstitutional portions are "replete with constitutional defects, including unconstitutional content and speaker-based infringement on editorial discretion and onerously burdensome disclosure and operational requirements."

"This ruling upholds the First Amendment and protects Internet users," CCIA President Matt Schruers said yesterday. "Without this temporary injunction, Texas's social media law would make the Internet a more dangerous place by tying the hands of companies protecting users from abuse, scams, or extremist propaganda... The First Amendment ensures that the government can't force a citizen or company to be associated with a viewpoint they disapprove of, and that applies with particular force when a state law would prevent companies from enforcing policies against Nazi propaganda, hate speech, and disinformation from foreign agents."

When Texas Gov. Greg Abbott signed the bill in September, he claimed the law is needed to protect Texans' First Amendment rights against "a dangerous movement by social media companies to silence conservative viewpoints and ideas." Florida tried to impose a similar social media law, but it was also blocked by a federal judge for violating the First Amendment.

Parler and Gab excluded from law
Under the Texas law, a platform that labels a post as misinformation "may be discriminating against that user's viewpoint by adding its own disclaimer," Pitman wrote. The law thus "restricts social media platforms' First Amendment right to engage in expression when they disagree with or object to content." The threat of lawsuits for violating the state law "chills the social media platforms' speech rights."

Pitman also found that the law's disclosure and operational requirements impose burdens on social media platforms' editorial discretion, and that the law "discriminates based on content and speaker."

Pitman noted that Texas lawmakers excluded conservative social networks Parler and Gab by applying the law only to platforms with 50 million or more monthly active users in the US. One state senator "unsuccessfully proposed lowering the threshold to 25 million monthly users in an effort to include" sites like Parler and Gab, Pitman wrote.

Social networks aren’t common carriers
One key question is whether social media platforms can be classified as common carriers. "This Court starts from the premise that social media platforms are not common carriers... Unlike broadband providers and telephone companies, social media platforms 'are not engaged in indiscriminate, neutral transmission of any and all users' speech,'" Pitman wrote. "User-generated content on social media platforms is screened and sometimes moderated or curated. The State balks that the screening is done by an algorithm, not a person, but whatever the method, social media platforms are not mere conduits."

Social media platforms exercise editorial discretion by allowing or banning content, adding their own content, and arranging user-submitted content in different ways, Pitman wrote:

Making those decisions entails some level of editorial discretion, even if portions of those tasks are carried out by software code. While this Court acknowledges that a social media platform's editorial discretion does not fit neatly with our 20th Century vision of a newspaper editor hand-selecting an article to publish, focusing on whether a human or AI makes those decisions is a distraction... the core question is still whether a private company exercises editorial discretion over the dissemination of content, not the exact process used. Plaintiffs' members also push back on the idea that content moderation does not involve judgment. For example, Facebook states that it makes decisions about "billions of pieces of content" and "[a]ll such decisions are unique and context-specific[] and involve some measure of judgment."

Social media networks regulated by the Texas law "curate both users and content to convey a message about the type of community the platform seeks to foster and, as such, exercise editorial discretion over their platform's content," Pitman added.

If this case reaches the Supreme Court, Justice Clarence Thomas is likely to be sympathetic to claims that social networks are common carriers. In April 2021, Thomas argued in a concurring opinion in a case involving former President Donald Trump and Twitter that social networks could face some First Amendment restrictions even though they are not government agencies. He argued that free-speech law shouldn't necessarily prevent lawmakers from regulating those platforms as common carriers. However, courts have generally found that the First Amendment's free speech clause does not prohibit private companies from restricting speech on their platforms.
What the Texas law says
Here's how we summarized Texas' HB 20 law in previous coverage:

The Texas law labels social media platforms as "common carriers" and applies its restrictions to social media platforms with over 50 million active users in the US. "A social media platform may not censor a user, a user's expression, or a user's ability to receive the expression of another person based on: (1) the viewpoint of the user or another person; (2) the viewpoint represented in the user's expression or another person's expression; or (3) a user's geographic location in this state or any part of this state," the law says. The bill's definition of "censor" is "block, ban, remove, deplatform, demonetize, de-boost, restrict, deny equal access or visibility to, or otherwise discriminate against expression." The Texas attorney general or users can sue social media platforms that violate this ban and win injunctive relief and reimbursement of court costs, the law says.

The law has exceptions for content involving sexual exploitation of children and for content that "directly incites criminal activity or consists of specific threats of violence targeted against a person or group because of their race, color, disability, religion, national origin or ancestry, age, sex, or status as a peace officer or judge; or is unlawful expression."
Hitler video example demonstrates law’s problem
The Lone Star State claimed that the law gives social networks enough leeway, but the judge was not convinced. Pitman wrote:

HB 20 prohibits social media platforms from moderating content based on "viewpoint." The State emphasizes that HB 20 "does not prohibit content moderation. That is clear from the fact that [HB 20] has an entire provision dictating that the companies should create acceptable use policies... [a]nd then moderate their content accordingly." The State claims that social media platforms could prohibit content categories "such as 'terrorist speech,' 'pornography,' 'spam,' or 'racism'" to prevent those content categories from flooding their platforms. During the hearing, the State explained that a social media platform "can't discriminate against users who post Nazi speech... and [not] discriminate against users who post speech about the anti-white or something like that."

Plaintiffs point out the fallacy in the State's assertion with an example: a video of Adolf Hitler making a speech, in one context the viewpoint is promoting Nazism, and a platform should be able to moderate that content, and in another context the viewpoint is pointing out the atrocities of the Holocaust, and a platform should be able to disseminate that content. HB 20 seems to place social media platforms in the untenable position of choosing, for example, to promote Nazism against its wishes or ban Nazism as a content category. As YouTube put it, "YouTube will face an impossible choice between (1) risking liability by moderating content identified to violate its standards or (2) subjecting YouTube's community to harm by allowing violative content to remain on the site."