Weekend Long Reads: Section 230

by Kevin Schofield

One of the frequent targets of President Donald Trump’s ire lately has been the cryptic “Section 230.” Last month, Trump threatened to veto the entire budget for the nation’s military forces if Congress didn’t include a repeal of Section 230 in the bill. This week’s “long read” is a deep dive into Section 230, its origins, and the ongoing controversies surrounding it, which extend far beyond Trump’s personal vendetta.

Section 230, formally 47 U.S.C. § 230, is a small section of federal law that was passed as part of the 1996 Communications Decency Act. Ironically, courts have since tossed out much of the CDA as unconstitutional; Section 230 is one of the few scraps that have survived. It sets out the rules for providers of services on the Internet — particularly services such as social media that let individuals post their own content — regarding the providers’ liability for what their users post to their sites.

The section followed a string of court rulings in the closing decades of the 20th century finding that online bulletin boards and similar services that simply hosted content, without being aware of what their users were posting, were not liable for that content even if it was offensive, illegal, or otherwise objectionable. On one hand, this was a good thing because it encouraged the creation of new platforms for free speech; on the other hand, it had the perverse effect of discouraging service providers from doing any kind of content moderation — and in many cases from even looking at the content on their own sites. The less they knew about what was going on, the more protected they were.

Section 230 was written to thread the needle between two conflicting goals: providing incentives for service providers to do reasonable moderation of content on their sites, while still shielding them from liability for their users’ content. Subsection (c) gets to the crux of the matter:

(c) Protection for “Good Samaritan” blocking and screening of offensive material 

(1) Treatment of publisher or speaker 

No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

(2) Civil liability 

No provider or user of an interactive computer service shall be held liable on account of— 

(A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or

(B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1).

An imperfect mechanism, to be sure, but a good attempt at a time when the Internet was evolving rapidly. There were, however, a couple of things that the legislators of the day didn’t anticipate.

First, they didn’t expect social media sites such as Facebook and Twitter to become hugely popular — the dominant platforms for social discourse as newspapers, civic groups, and other social organizations declined — placing the private companies that run the sites in control of enormous portions of the country’s social and political discourse.

Second, they didn’t anticipate that conservative politicians and pundits would play the “victim card” and claim that social media sites had a liberal bias (for which there is no credible evidence). 

The first issue has become a significant one for researchers, public policy advocates, and First Amendment advocacy groups concerned that private industry moguls control too much of the public square. The second issue is the heart of Trump’s grievance against Section 230: despite his enormous following on Twitter and Facebook (before he was banned last week), he constantly complained of bias whenever either site slapped a warning label on one of his more egregious lies or provocations. In fact, last May Trump issued an executive order expressing his belief in the conservative myth that social media has a liberal bias, and setting up his late-term push to have Section 230 revoked. Ironically, revoking Section 230 would probably have led to Twitter and Facebook taking an even heavier hand in moderating his posts, since it’s implausible to argue that the sites were completely unaware of what the President of the United States was posting.

In 2018 the Harvard Law Review published an article by Kate Klonick titled “The New Governors” on Section 230, its history, the raging debate over how to think about Twitter’s and Facebook’s power over public speech, and options for where we go from here. It’s a fascinating read that dives into many of the most head-scratching questions raised by the state of things today. Among them: if social media sites have protection from liability for content posted by their users (and they do), then why do they still choose to moderate content on their sites?

It also demystifies exactly how Twitter, Facebook, and other social media sites perform their moderation functions. The lack of transparency on exactly how that works is one of the reasons so many people are uncomfortable today with the power that the companies have amassed to remove posts and ban individuals without any formal appeals process.

It stops short of arguing that the First Amendment should apply to social media companies, but it does suggest that there ought to be “technological due process” requirements that the companies must follow — requirements that would allow us to understand what’s really going on behind the scenes in their content moderation departments and to challenge moderation decisions we believe are inappropriate.

As a country and society, we’re due for a hard conversation about Section 230 and the role of private companies as the gatekeepers for public discourse. Social media isn’t going away, but we need to find a way to make it work for us.

The New Governors: the people, process, and rules governing online speech

Kevin Schofield is a freelance writer and the founder of Seattle City Council Insight, a website providing independent news and analysis of the Seattle City Council and City Hall. He also co-hosts the “Seattle News, Views and Brews” podcast with Brian Callanan, and appears from time to time on Converge Media and KUOW’s Week in Review.

Before getting into journalism, Kevin worked at Microsoft for 26 years, including 17 in the company’s research division. He has twin daughters, loves to cook, and is trying hard to learn Spanish and the guitar.

The featured image is in the public domain.