From the First Amendment to Section 230, Eric Goldman Explains the Online Speech Law in the U.S.
Publication Date: April 19, 2023

“Given the partisan split on content moderation expectations, Internet services are now routinely subject to partisan attacks on all sides. The services cannot satisfactorily navigate these attacks, because (as with any partisan topic) any accommodations to one team automatically anger the other team. As a result, the partisanship creates a dangerous ecosystem for terrible policy ideas, especially when one partisan party controls all of the applicable regulatory apparatus.”
– Professor Eric Goldman, from his article “The United States’ Approach to ‘Platform’ Regulation”
Professor Eric Goldman, Santa Clara University School of Law, is highly regarded for his expertise in internet law and specifically Section 230. [Section 230 of the 1996 Communications Decency Act shields internet platforms from liability for users’ activity.] His recent article, “The United States’ Approach to 'Platform' Regulation” provides an overview of online speech law in the U.S.
In this article, Professor Goldman discusses the United States’ legal framework governing Internet platforms that publish third-party content. He highlights three key features of U.S. law, along with a fourth area of ongoing government activity. These four topics are:
- The First Amendment, constitutional protections for free speech and press;
- Section 230, the statutory immunity provided by 47 U.S.C. § 230 of the 1996 Communications Decency Act;
- Limits on state regulation of the Internet;
- U.S. governmental efforts to impose mandatory transparency obligations on Internet platforms.
Below are excerpts from “The United States’ Approach to 'Platform' Regulation” by Eric Goldman (April 13, 2023, published as part of the Defeating Disinformation UnConference).
What Is a Platform?
The term “platform” has emerged as a standard descriptor for Internet services and has been incorporated into U.S. legislation. Nevertheless, this term is problematic.
First, the term “platform” often refers to Internet services that facilitate user-to-user communications—with Facebook viewed as the archetypical “platform” and the primary target for regulatory intervention. Yet, user-to-user online communications take place in a wide range of services, including consumer review services, wikis, message boards, private messaging services (such as email services and private chat tools), online marketplaces, online dating services, livestreaming video services, video hosting services, “social media,” and much more.
Second, the term “platform” can obscure or trivialize a service’s editorial and publication functions. The nomenclature implies that the entities do not qualify for stringent constitutional protections applicable to “publishers.” For example, “platforms” are sometimes analogized to common carriers or utilities because those entities get diminished constitutional protections. Alternatively, sometimes platforms are analogized to “public squares” or other government-provided functions that must comply with the constitution. Because the term “platform” subverts Internet services’ editorial, curatorial, and publishing functions, it has significant political valence and consequences.
To sidestep the semantic and substantive problems created by the “platform” term, this paper instead uses the term “Internet services” to cover all online services that publish third-party content.
The First Amendment
Content Protected by The First Amendment
Some types of content are characterized as receiving no First Amendment protection at all, including child sexual abuse material (CSAM), obscenity, incitements to violence, and defamation. However, each of those exclusions is defined narrowly. For example, incitements to violence may be punished only when the speech is likely to lead to imminent unlawful violence.
As a result, many categories of speech that are regulated internationally may be Constitutionally protected in the United States and thus subject to little or no government restriction. Some examples:
- “hate speech.” Unless the speech fits into one of the content categories that do not get First Amendment protection, the government cannot Constitutionally restrict the expression of odious views about other people based on immutable characteristics. As just one example, the First Amendment protects distribution of Nazi images and paraphernalia and public support for the Nazi party and its ideals (including declarations of Nazi affiliation).
- “cyberbullying.” “Cyberbullying” activities are often protected by the First Amendment, including typical online bullying behavior like name-calling, dehumanizing references, brigading, doxing, and uncivil behavior. Cyberbullying behavior becomes actionable only in extreme cases, such as when it constitutes criminal stalking, criminal harassment, or imminent threats of violence.
- “misinformation.” The First Amendment protects many categories of “false” information. False political statements are routinely permitted unless they are defamatory, and the First Amendment imposes heightened standards for defamation claims in those circumstances. The First Amendment also protects health misinformation provided by non-experts, such as scientifically unsupported statements against vaccines or downplaying concerns about the COVID pandemic.
Protection for Internet Services as “Publishers”
When Internet services perform editorial or curatorial functions, the First Amendment protects those functions to the same degree it would protect offline content publishers. As one court explained, “Like a newspaper or a news network, Twitter makes decisions about what content to include, exclude, moderate, filter, label, restrict, or promote, and those decisions are protected by the First Amendment.” (O'Handley v. Padilla, 2022 WL 93625)
Overview of Section 230
In 1996, Congress enacted 47 U.S.C. § 230 as part of the Communications Decency Act.
Translating the statute’s key provisions into simpler language:
- Section 230(c)(1) says that websites and other online services are not liable for third-party content.
- Section 230(c)(2)(A) says that websites and other online services are not liable for the content filtering decisions they make, even if they are partially responsible for the content.
- Section 230(c)(2)(B) says that vendors of anti-threat software filters, such as anti-spam, anti-spyware, anti-virus, and parental-choice tools, are not liable for their blocking decisions.
Defendants need Section 230(c)(1)’s immunity only when the law would otherwise impose liability. This immunization is not a technical loophole or a “get-out-of-jail-free” card. It is a critical policy choice that Congress made to get the benefits provided by Internet services that otherwise would be foreclosed by liability concerns.
Section 230(c)(1) is a globally unique policy. No other country has adopted a legal rule like it.
State Regulation of the Internet
The United States vests regulatory power both in the federal government and in sub-national regulators, such as state legislatures. In practice, however, state legislatures have numerous limits on their authority over Internet services, including the First Amendment, Section 230 (which expressly preempts most conflicting state laws regarding third-party content), federal preemption, and Constitutional limits on personal jurisdiction.
Transparency as Regulation
In addition to, or instead of, dictating content moderation decisions outright, legislatures are requiring Internet services to provide greater “transparency” about their editorial practices and operations. These regulations can take many forms, including requiring Internet services to publish their editorial policies, provide explanations to affected users about content moderation decisions, and publish aggregated statistics about their content moderation practices.
Whether or not states create new transparency requirements for Internet services, government enforcement agencies are demanding disclosures from Internet services by invoking consumer protection laws, such as state “UDAP” (unfair and deceptive acts or practices) laws and other restrictions on “false” advertising.
Conclusion
Online speech freedoms have become inextricably intertwined with partisan politics, which creates irreconcilable conflicts. Oversimplified, Democrats want Internet services to remove more content, even if it is Constitutionally protected, while Republicans want Internet services to publish more content, even content that hurts society or the Internet service’s audience. Although both sides are unhappy with the current legal framework governing Internet services and would favor censorial interventions, their solutions advance two radically different visions of the Internet’s future.
Given the partisan split on content moderation expectations, Internet services are now routinely subject to partisan attacks on all sides. The services cannot satisfactorily navigate these attacks, because (as with any partisan topic) any accommodations to one team automatically anger the other team. As a result, the partisanship creates a dangerous ecosystem for terrible policy ideas, especially when one partisan party controls all of the applicable regulatory apparatus. This means the U.S. legal framework described by this paper could change dramatically—and almost certainly not for the better—imminently.
Read the full article: “The United States’ Approach to 'Platform' Regulation” by Eric Goldman (April 13, 2023, published as part of the Defeating Disinformation UnConference).
Read More
- “New Op-Ed: People Who Understand Section 230 Actually Love It” by Eric Goldman (TAP Blog, January 21, 2021)
- “Content Moderation Remedies” by Eric Goldman (Michigan Technology Law Review, Vol. 28, pp. 1-59, 2021)
- “Want to Learn More About Section 230? A Guide to My Work” by Eric Goldman (TAP Blog, July 16, 2020)
- “An Overview of the United States’ Section 230 Internet Immunity” by Eric Goldman (The Oxford Handbook of Online Intermediary Liability 155, 2020)
- “The Ten Most Important Section 230 Rulings” by Eric Goldman (Tulane Journal of Technology & Intellectual Property, Vol. 20, Fall 2017)
About Eric Goldman
Eric Goldman is a Professor of Law at Santa Clara University School of Law, where he is also Director of the school’s High Tech Law Institute. His research and teaching focus on Internet law, intellectual property, and marketing law.