Section 230 of the Communications Decency Act, which shields online services from liability for third-party content, is the foundation of the modern internet. Recent debate has focused on algorithmic curation and content moderation, but this misses the heart of the issue.
I started my journey down this rabbit hole by reading an interview that The Verge did with Jeff Kosseff: “Why the internet’s most important law exists and how people are still getting it wrong.” Kosseff is the author of the book The Twenty-Six Words That Created the Internet. Those words refer to Section 230 of the Communications Decency Act:
No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
Kosseff’s thesis is that this law was central to the creation of today’s internet. Without Section 230, the internet would be a radically different place.
I also watched a discussion at the Cato Institute last year featuring Kosseff and other scholars. The introduction was already interesting:
Regulators have long grappled with the challenge of preventing libel, media bias, radicalization, and harassment without suppressing free expression and the positive democratic externalities thereof. There is, however, a tendency, a temptation, to treat these questions as novel, born of specific political controversies or particular features of currently dominant speech platforms. These tensions … long predate social media, and the internet itself.
To understand Section 230, we have to go back to the 1950s, and we have to treat these questions in a platform-agnostic way. The following is based on the interview and the discussion, both linked above.
The basis for Section 230 predates the internet, going back to the 1950s. A Los Angeles bookstore owner was jailed for carrying an erotic novel, convicted under a local ordinance that held storekeepers responsible for any obscene material they stocked. The Supreme Court struck down the ordinance in Smith v. California (1959), reasoning that no distributor could possibly review all of their content before selling it.
Fast forward to the early 1990s, and early internet services had emerged. One platform (CompuServe) didn’t moderate content at all, and another (Prodigy) did. Both were sued for defamation. Courts found that CompuServe was not liable because, like the LA bookstore, it was merely a distributor. Prodigy, though, was liable: because of its moderation and other curation efforts, it was considered a publisher of content.
You get this really weird rule where these online platforms can reduce their liability by not moderating content.
A couple of years later, Congress was beginning to regulate the emerging internet. Sen. James Exon proposed penalizing platforms for hosting material deemed objectionable for minors. The tech companies objected, saying that they were better positioned than the government to determine what is appropriate for minors. If the companies didn’t police their platforms appropriately, they argued, then their users would simply leave.
The House, meanwhile, created what would become Section 230 of the Communications Decency Act. Proposed by Rep. Chris Cox and Rep. Ron Wyden, the “26 words” were passed at the same time as Exon’s bill; the Supreme Court struck down Exon’s provisions the next year (Reno v. ACLU, 1997), and so Section 230 remained.
The first known application of the law was in the case Zeran v. America Online. Kenneth Zeran had personal information about him posted on AOL, which led to his harassment; Zeran sued AOL, but the courts found that AOL was not liable for the content posted there.
Kosseff says that, according to Chris Cox, the Zeran interpretation was precisely what was intended by Section 230. It existed to protect platforms.
Kosseff continues with how this has framed the present-day debate about content moderation:
And I need to be clear: the platforms are not doing enough to moderate. Also … there is not going to be a perfect solution, because no matter what you do, unless you ban all user content, some bad content is going to get in.
That’s one side of the debate. He goes on with the other:
The internet under Section 230, I believe, has held up a mirror to society; to say this is what we look like and there are some bad people who will do bad things, like defame someone and make these claims that could really ruin someone’s life. We have issues like revenge porn, terrorists recruiting on social media, sex trafficking, harassment … and Section 230 does allow the platforms to take a hands-off approach.
Kosseff makes clear: repealing Section 230, as some (including presidential candidate Joe Biden) propose, would destroy the modern internet. Today’s largest websites, for better or for worse, rely on user-generated content, and on the fact that the websites cannot be held liable for that content.
“[Section 230] created the social structure of the internet we know today,” he confidently states.
That’s not necessarily a problem—one may argue that such a rebuilding of the internet would be a good thing, though I disagree. But if Section 230 is to be repealed or even rewritten, one must approach the discussion with all of the necessary nuance.
The Cato Institute book club continued with remarks by Emma Llansó, director of the Center for Democracy and Technology’s Free Expression Project. One of the most compelling parts of her talk was the intersection of Section 230 and the First Amendment:
A lot of the content that people want to see platforms taking more action against … we’re talking about speech that is protected by the First Amendment … we’re talking about speech that it’d be difficult to compel a platform to take action against, even if 230 didn’t exist. You’d be trying to craft a law that required a content host to take down someone’s lawful, constitutionally protected speech. That’s probably not going to get very far with the Supreme Court.
Content like disinformation or even certain kinds of hate speech, however reprehensible, is constitutionally protected. That doesn’t mean that platforms can’t moderate it, though: precisely the opposite is true. Section 230 enables platforms to moderate content however they want. They can restrict speech based on users’ viewpoints, if they so choose. They can also take a hands-off approach.
In principle (*), users who disagree with a platform’s moderation choices are free to leave for another. There is evidence to suggest that this happens: when Reddit banned several subreddits in 2015, one study found that this likely resulted in the migration of users to other sites.
One can argue whether or not this ban helped to combat hate speech more broadly, or whether it led to the growth of more extreme communities on other sites. But Reddit decided they didn’t want certain content on their platform, and so they banned it. Section 230 gave them the power to do this.
(*) I say “in principle” because I’m not totally convinced that this is true of the major social media platforms. I can delete my Facebook account, or choose not to use YouTube, but those websites create such a centralizing force over the internet and their userbases that it’s hard to argue that realistic alternatives exist.
This all gets at the crux of what I find interesting about this law: the intersection between Section 230 and the nebulous idea of “neutrality.” I’ve written previously that there’s no such thing as a neutral platform, at least algorithmically speaking (the link goes to my summary of a talk by Celeste Kidd, whose ideas I loosely adopted). There’s also no provision of neutrality required in 230, though this frequently gets lost in the debate surrounding it.
Ted Cruz is perhaps the most notable offender in misrepresenting Section 230. The following is taken from his questioning of Mark Zuckerberg in April 2018:
“The predicate for Section 230 immunity under the CDA is that you’re a neutral public forum. Do you consider yourself a neutral public forum, or are you engaged in political speech, which is your right under the First Amendment?”
Cruz is, of course, wrong. There is no predicate for being a “neutral public forum”—Facebook could choose to delete all right-wing content, or all content from people whose last names were Cruz, or anything else they wanted, and they would still be protected under the CDA.
That’s not the most interesting part to me, though; Ted Cruz is wrong all the time. What is interesting is that platforms like Facebook, Twitter, and YouTube are in hot water with both parties, for different reasons.
Democrats seem to believe that they’re not moderating enough (e.g., Facebook allowing false political ads, the rampant spread of misinformation on Twitter, and potential radicalization pathways on YouTube). Republicans often claim they’re censoring right-wing views.
Both, though, miss the core point: a platform cannot be neutral, at least when content is curated algorithmically. Two recent examples come to mind:
- Twitter choosing not to allow political ads is a decision that likely hurts less well-known candidates. Choosing to keep them would likely result in false advertising that misleads voters. Both are decisions with real impacts, and neither is “neutral.”
- No one seems to know exactly what YouTube’s recommendation algorithm does, but whatever it does, it can’t be neutral; any recommendation algorithm implicitly assigns value judgments to different types of content, as the sketch below illustrates.
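To make that second point concrete, here’s a toy sketch in Python. The videos, fields, and weights are all invented for illustration; nothing here reflects how YouTube or any real platform actually works. The point is only that a ranked feed needs a scoring function, and every coefficient in that function is a value judgment:

```python
from dataclasses import dataclass

@dataclass
class Video:
    title: str
    watch_time_hours: float  # total hours viewers have spent watching
    likes: int
    age_days: float          # days since upload

def score(v: Video, w_watch: float = 1.0, w_likes: float = 0.5,
          w_recency: float = 2.0) -> float:
    """Hypothetical feed-ranking score. Every coefficient here is a
    value judgment: weighting watch time favors long, bingeable videos;
    weighting likes favors engagement bait; weighting recency favors
    breaking news. There is no 'neutral' setting; even equal weights
    are a choice."""
    recency = 1.0 / (1.0 + v.age_days)
    return (w_watch * v.watch_time_hours
            + w_likes * v.likes
            + w_recency * recency)

videos = [
    Video("calm explainer", watch_time_hours=500, likes=300, age_days=30),
    Video("outrage clip", watch_time_hours=90, likes=900, age_days=1),
]

# The default weights rank the well-watched explainer first ...
print(max(videos, key=score).title)                            # calm explainer
# ... but valuing engagement more heavily flips the ranking.
print(max(videos, key=lambda v: score(v, w_likes=5.0)).title)  # outrage clip
```

The same two videos rank differently depending on which weights you pick, and picking weights is unavoidable. Neither ordering is “neutral.”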
Put another way, it’s impossible for these platforms to achieve any standard of neutrality in the age of algorithmic content curation. Requiring the FTC to “certify” neutrality among big tech platforms, as Sen. Hawley proposed last year, would be an unenforceable and partisan disaster.
Moreover, those who argue in favor of increased moderation ought to defend Section 230. It’s certainly appealing to proclaim that Facebook should be responsible for the content posted there, but in practice it’s impossible for them to moderate such an enormous platform. Repealing 230 (as Biden proposed!) would more likely result in Facebook abandoning all moderation efforts so as to protect itself from liability, just as CompuServe did in 1991.
There are legitimate criticisms of these platforms, to be sure—the lack of visibility into and control over algorithmic feeds is my primary one—but conflating neutrality, transparency, and accountability misses the heart of the issue.
This post is the output of a fascinating two-day deep dive into Section 230. I have to acknowledge my girlfriend Erica for pointing me to the original Verge article, which sent me down the rabbit hole. Kosseff’s book is on my reading list now, and I hope to get to it soon.
The questions surrounding large tech platforms are as important as they have ever been. But to have any hope of combating the real issues that exist, we have to approach the problem from a shared understanding: there is no such thing as a neutral platform. Any solutions to the (again, substantive) issues with Section 230 must bear this in mind.