Last year, as part of the first wave of censorial mandatory editorial transparency laws, New York enacted N.Y. Gen. Bus. Law § 394-ccc. The law “has two main requirements: (1) a mechanism for social media users to file complaints about instances of “hateful conduct” and (2) disclosure of the social media network’s policy for how it will respond to any such complaints.” The law defines “hateful conduct” to cover “(1) conduct that vilifies; (2) conduct that humiliates; and (3) conduct that incites violence.” In response to a constitutional challenge, a federal court preliminarily enjoined the law.
The Court’s Ruling
The court says that the mandatory disclosure of editorial policies is compelled speech: “the law requires that social media networks devise and implement a written policy—i.e., speech.”
The court treats this mandatory disclosure as requiring publishers to endorse the state’s definition of “hateful conduct”: “the Hateful Conduct Law requires a social media network to endorse the state’s message about ‘hateful conduct’…each social media network’s definition of ‘hateful conduct’ must be at least as inclusive as the definition set forth in the law itself….Requiring Plaintiffs to endorse the state’s definition of ‘hateful conduct’, forces them to weigh in on the debate about the contours of hate speech when they may otherwise choose not to speak.”
The court also sees the mandatory disclosure of editorial policies as an incursion into the publisher’s editorial practices:
Plaintiffs have an editorial right to keep certain information off their websites and to make decisions as to the sort of community they would like to foster on their platforms. It is well-established that a private entity has an ability to make “choices about whether, to what extent, and in what manner it will disseminate speech…” NetChoice, LLC v. Att’y Gen., Fla., 34 F.4th 1196, 1210 (11th Cir. 2022). These choices constitute “editorial judgments” which are protected by the First Amendment….
the Hateful Conduct Law requires social media networks to disseminate a message about the definition of “hateful conduct” or hate speech—a fraught and heavily debated topic today. Even though the Hateful Conduct Law ostensibly does not dictate what a social media website’s response to a complaint must be and does not even require that the networks respond to any complaints or take down offensive material, the dissemination of a policy about “hateful conduct” forces Plaintiffs to publish a message with which they disagree. Thus, the Hateful Conduct Law places Plaintiffs in the incongruous position of stating that they promote an explicit “pro-free speech” ethos, but also requires them to enact a policy allowing users to complain about “hateful conduct” as defined by the state
The court says the Zauderer test does not apply (and it’s not close):
The policy disclosure at issue here does not constitute commercial speech and conveys more than a “purely factual and uncontroversial” message. The law’s requirement that Plaintiffs publish their policies explaining how they intend to respond to hateful content on their websites does not simply “propose a commercial transaction”. Nor is the policy requirement “related solely to the economic interests of the speaker and its audience.”…
the law clearly implicates protected speech—namely hate speech—by requiring a disclosure of the Plaintiffs’ policy for responding to complaints of hateful content. This is different in character and kind from commercial speech and amounts to more than mere disclosure of factual information, such as caloric information or mercury content
Disclosures of editorial practices are distinguishable from nutrition labels and similar compelled commercial disclosures because the former affect speech interests that aren’t present with most goods and services. To me, this is so obvious that I can’t help but question the motivation of those who (often glibly) equate the two.
Instead of Zauderer’s relaxed scrutiny or Central Hudson’s intermediate scrutiny, the court says “the Hateful Conduct Law regulates speech based on its content, [so] the appropriate level of review is strict scrutiny.” Amen!
Unsurprisingly, the law fails strict scrutiny. The state argues the law is designed to prevent mass shootings like the one in Buffalo, but the court says it’s not narrowly tailored to that goal: “it is hard to see how the law really changes the status quo—where some social media networks choose to identify and remove hateful content and others do not.” The legislative history indicates the law was really aimed at online misinformation, but “the First Amendment’s shielding of hate speech from regulation means that a state’s desire to reduce this type of speech from the public discourse cannot be a compelling governmental interest.” Though the law includes violent content in the “hateful conduct” definition, the definition is not limited to incitement of imminent violence.
The court explains the law’s anti-speech consequences:
the law is clearly aimed at regulating speech. Social media websites are publishers and curators of speech, and their users are engaged in speech by writing, posting, and creating content. Although the law ostensibly is aimed at social media networks, it fundamentally implicates the speech of the networks’ users by mandating a policy and mechanism by which users can complain about other users’ protected speech.
Moreover, the Hateful Conduct law is a content based regulation. The law requires that social media networks develop policies and procedures with respect to hate speech (or “hateful conduct” as it is recharacterized by Defendant). As discussed, the First Amendment protects individuals’ right to engage in hate speech, and the state cannot try to inhibit that right, no matter how unseemly or offensive that speech may be to the general public or the state….
Even though the law does not require social media networks to remove “hateful conduct” from their websites and does not impose liability on users for engaging in “hateful conduct”, the state’s targeting and singling out of this type of speech for special measures certainly could make social media users wary about the types of speech they feel free to engage in without facing consequences from the state. This potential wariness is bolstered by the actual title of the law—“Social media networks; hateful conduct prohibited”—which strongly suggests that the law is really aimed at reducing, or perhaps even penalizing people who engage in, hate speech online….social media users often gravitate to certain websites based on the kind of community and content that is fostered on that particular website. Some social media websites—including Plaintiffs’— intentionally foster a “pro-free speech” community and ethos that may become less appealing to users who intentionally seek out spaces where they feel like they can express themselves freely.
The court also criticizes the vague terms “vilify” and “humiliate.”
While the opinion is mostly good news, the court says Section 230 doesn’t preempt the law:
The law does not impose liability on social media networks for failing to respond to an incident of “hateful conduct”, nor does it impose liability on the network for its users own “hateful conduct”. The law does not even require that social media networks remove instances of “hateful conduct” from their websites.
The 230 preemption argument always seemed like a stretch. Though legislatures have no business meddling with publishers’ editorial operations and decisions, Section 230 by its terms only applies to legal obligations that treat online publishers as publishers/speakers of information provided by others. The mandatory editorial transparency provisions are more akin to other forms of compelled commercial first-party disclosures than an imposition of liability for third-party content. That’s why I think the First Amendment, not Section 230, is the real limit.
Implications
This opinion makes several critical moves:
- It recognizes that mandatory editorial transparency functions as a speech restriction.
- It says Zauderer doesn’t apply and strict scrutiny applies instead. It’s worth remembering that both the 5th and 11th Circuits applied Zauderer, so this ruling diverges from those precedents. As I’ve explained elsewhere, I think the 5th and 11th Circuits obviously botched their determination that Zauderer applies, so I think this court got it right.
- It rejects the online exceptionalism that social media services are somehow doing something other than publishing content. Instead, the court says flatly: “Social media websites are publishers and curators of speech.”
- Mandatory editorial transparency that focuses on one category of speech is especially problematic when it reaches constitutionally protected speech. Thus, I think this ruling would similarly strike down CA AB 587 because it prioritized five categories of constitutionally protected speech (“Hate speech or racism,” “Extremism or radicalization,” “Disinformation or misinformation,” “Harassment,” and “Foreign political interference”) for heightened disclosure over other categories.
Obviously this opinion won’t be the final word on the NY law. If it stands, legislatures will try to draft around this ruling. They could avoid the problem flagged in the final bullet point by drafting mandatory editorial transparency provisions that are content-agnostic, which is what the Florida and Texas social media censorship laws did. However, this court would still say those laws don’t qualify for Zauderer and still intrude on publishers’ editorial functions, so I think this ruling predicts that even content-agnostic disclosures will be viewed as speech restrictions subject to strict scrutiny. While this court didn’t opine on the full range of transparency regulatory options, I don’t think there’s any meaningful constitutional difference among them.
The court’s categorical rejection of Zauderer highlights how Zauderer evangelists are using the precedent to normalize/justify censorship. This is why the Supreme Court needs to grant cert in the Florida and Texas cases. Ideally the Supreme Court will reiterate that Zauderer is a niche exception of limited applicability that does not include mandatory editorial transparency. Once Zauderer is off the table and legislatures are facing strict scrutiny for their mandated disclosures, I expect they will redirect their censorial impulses elsewhere.
Finally, if this ruling stands, it’s a reminder that the DSA’s mandatory editorial transparency would violate the First Amendment. Anyone celebrating the DSA as a laudable policy might want to acknowledge the possibility that there’s an unbridgeable gap between EU and US regulations.
Case citation: Volokh v. James, 2023 WL 1991435 (S.D.N.Y. Feb. 14, 2023)