The future of Section 230


What comes next for Section 230? What comes next for the platforms that have used Section 230 as a shield for so long?

Section 230 under fire

Both Presidential candidates dislike Section 230.

Near the end of May, Trump (once again) got angry at Twitter, this time for putting one of his tweets behind a warning label. On May 28, he issued an executive order aimed at limiting the liability shield that Section 230 offers platforms. From the text of the order:

When an interactive computer service provider removes or restricts access to content and its actions do not meet the criteria of subparagraph (c)(2)(A), it is engaged in editorial conduct. It is the policy of the United States that such a provider should properly lose the limited liability shield of subparagraph (c)(2)(A) and be exposed to liability like any traditional editor and publisher that is not an online provider.

This is unlikely to hold up in court. Subparagraph (c)(2)(A) hinges on platforms acting in “good faith,” which is more or less impossible to test. And Trump’s anger about Twitter not being a “neutral” platform is baseless; Section 230 doesn’t say anything about neutrality.

Biden dislikes it too, though. His campaign is on record saying that 230 should be revoked, and he instead wants to “hold social media companies accountable.” From an interview with the New York Times:

And it should be revoked. It should be revoked because [Facebook] is not merely an internet company. It is propagating falsehoods they know to be false, and we should be setting standards not unlike the Europeans are doing relative to privacy. You guys still have editors. I’m sitting with them. Not a joke. There is no editorial impact at all on Facebook. None. None whatsoever. It’s irresponsible. It’s totally irresponsible.

An outright repeal of Section 230 should simply be off the table. It’s not possible to do this in a way that isn’t utterly destructive to society.

Section 230 is the foundation on which the internet is built, and a foundation cannot be torn down without destroying everything that sits on top. How many websites would collapse? How many people would be out of work? How many lives would be upended, in an age when more and more people have built their lives around the internet, if Facebook, Twitter, YouTube, and Reddit alone stopped existing tomorrow?

It’s about size

And it’s not about misinformation or hate speech, either. It’s not that Facebook and Twitter don’t take action against misinformation—they do, and they dedicate immense resources to it. But Facebook isn’t propagating falsehoods “they know to be false”; to claim so is to assume that Facebook has meaningful control over the billions of messages posted on their platform every day.

And you could reasonably argue that that’s precisely the problem—that Facebook doesn’t have control over their platform, that it’s gotten too big for any company or even government to be able to control. And I think I’d agree, but that’s not a question of misinformation or hate speech or content policies; it’s a question of size.

It is Facebook’s size, operating a social network at a scale that no one ever has before, that leads to all of these questions about what is and isn’t allowed on their platform. It is Facebook’s size that forces them to, as Casey Newton of The Verge points out, operate “a vast quasi-legal system” for making those decisions.

This isn’t a debate about hate speech “being allowed”—it obviously isn’t, and to say otherwise is to bury your head in the sand and ignore Facebook’s actual actions, like removing 9.6 million pieces of hate speech in Q1 2020.

The power of platforms

Platforms are now in the national spotlight more than ever before, and I believe this is because people are starting to realize how much power they truly hold. Social media platforms have a reach that even governments, on their own, don’t have. This is clear from the fact that politicians use social media; it’s where the people are.

The companies operating these platforms, then, have a greater responsibility to the public than companies have had before. They have a greater responsibility to combat misinformation than, say, a newspaper did 50 years ago.

Their size makes that hard to do, though. It’s impossible for Facebook to catch all the misinformation on their platforms. It’s impossible for YouTube to take down all the hateful videos. And anyone who thinks it is possible is underestimating the scale these platforms operate at: a scale that no previous company has ever had to manage.

I think it’s also worth noting that platforms have no legal responsibility to remove hate speech or misinformation. All of the major ones do it anyway; they’ve voluntarily created policies restricting such content. Not doing so would turn their platforms into wastelands, of course, and the moderation is unequivocally a good thing. But it isn’t happening because of the law!

(Nor should it be; again, a law requiring that platforms remove misinformation would be impossible to follow.)

And so this leaves us in the weird position of hoping that companies’ sense of social responsibility (plus a bit of market pressure; see the Facebook boycott) outweighs the legal default of being able to do whatever they want.