EVERYTHING YOU NEED TO KNOW ABOUT SECTION 230 The most important law for online speech


This is a living guide to Section 230: what it is, what it isn’t, why it’s controversial, and how it might be changed. It will be updated as events warrant.


Section 230 of the Communications Decency Act, which was passed in 1996, says an “interactive computer service” can’t be treated as the publisher or speaker of third-party content. This protects websites from lawsuits if a user posts something illegal, although there are exceptions for copyright violations, sex work-related material, and violations of federal criminal law.

Sen. Ron Wyden (D-OR) and Rep. Chris Cox (R-CA) crafted Section 230 so website owners could moderate sites without worrying about legal liability. The law is particularly vital for social media networks, but it covers many sites and services, including news outlets with comment sections — like The Verge. The Electronic Frontier Foundation calls it “the most important law protecting internet speech.”

It’s increasingly controversial and frequently misinterpreted, however. Critics argue that its broad protections let powerful companies ignore real harm to users. On the other hand, some lawmakers incorrectly claim that it only protects “neutral platforms” — a term that’s irrelevant to the law.

Similar legislation exists in the European Union and Australia.



In the United States, the First Amendment prohibits the government from restricting most forms of speech, which would include many proposals to force tech companies to moderate content. A law that required companies to moderate content based on the political viewpoint it expresses, for example, would likely be struck down as unconstitutional.

But private companies can create rules to restrict speech if they so choose. This is why Facebook and Twitter ban hate speech, for example, even though it is permitted under the First Amendment.

This issue is distinct from discussions over whether platforms should be liable for what their users post, though it often gets lumped in with the 230 discussion.


In August 2019, President Donald Trump reportedly drafted an executive order that would require the Federal Communications Commission to develop rules that could limit Section 230 protections. The draft was met with confusion from regulators and legal experts, and the White House seemed to lose interest; the order was tabled until May 2020, when a feud with Twitter brought it back into active consideration.

In basic terms, the order provides a pathway for regulators to strip platforms of the protections granted by Section 230. Specifically, users would be directed to file complaints of bias with the Federal Trade Commission, and the FCC would follow up on those complaints to see if they justify removing a platform’s “good faith” provision under the law. More broadly, the order takes significant liberties in how it interprets the text of the law and orders agencies to follow that interpretation, rather than the interpretations offered by the courts or by Congress.

It’s unclear how all of that will hold up in court, but it has sparked significant new interest in modifying the law from Republicans in Congress.


In April 2018, Trump signed into law the Allow States and Victims to Fight Online Sex Trafficking Act (FOSTA), a bill that purports to fight sex trafficking by reducing legal protections for online platforms. (It’s also sometimes referred to as the Stop Enabling Sex Traffickers Act, or SESTA, after an earlier version of the bill.)

FOSTA carves out a new exception to Section 230, stating that Section 230 doesn’t apply to civil and criminal charges of sex trafficking or to conduct that “promotes or facilitates prostitution.” The rule applies retroactively to sites that violate it.


Following the law’s passage, websites began to censor parts of their platforms — not because they were currently hosting prostitution ads, but because of the faint possibility that some third party could post one in the future. The law is why Craigslist no longer has a Personals section. Sex workers say they have broadly been forced offline, making their work far less safe. Prostitution-related crime in San Francisco alone — including violence against workers — more than tripled.

Democrats have called for a study of the harms created for sex workers by the law. There is little to no evidence that the law has had much of an effect on reducing online sex trafficking.


In February 2020, the US Department of Justice held a day-long workshop to discuss ways in which Section 230 could be further amended. The department is examining cases in which platforms have enabled the distribution of nonconsensual pornography, harassment, and child sexual abuse imagery.

Proposals to reform the law generally fall into two categories. One is a “carveout” approach that removes protections from certain categories of content — as FOSTA-SESTA did for sex work-related material. The other is a “bargaining chip” system that ties liability protection to meeting certain standards — like the proposed Eliminating Abusive and Rampant Neglect of Interactive Technologies Act (EARN IT), which, as its name suggests, would make sites demonstrate that they are fighting child sex abuse. (This would have the likely side effect of weakening encryption for private messaging.) This approach is often bundled with broader data privacy and tech regulation proposals, which are covered in more detail in a separate guide.


To date, legislators have paid less attention to online marketplaces like Airbnb, which also benefit from the liability shield created by Section 230.


Democrats have largely been concerned with getting platforms to remove more content because of the harms associated with hate speech, terrorism, and harassment.

In January 2020, former Vice President Joe Biden proposed revoking Section 230 completely. “The idea that it’s a tech company is that Section 230 should be revoked, immediately should be revoked, number one. For Zuckerberg and other platforms,” Biden said. “It should be revoked because it is not merely an internet company. It is propagating falsehoods they know to be false.” Biden never responded to follow-up questions about this statement.

Vox asked several leading Democratic candidates to weigh in on Section 230 in December 2019. Sen. Bernie Sanders (I-VT) said, “Tech giants and online platforms should not be shielded from responsibility when they knowingly allow content on their platforms that promotes and facilitates violence.”

In August 2019, former presidential candidate Beto O’Rourke proposed amending Section 230 to make it easier to sue big tech platforms if they failed to remove hate speech and terrorist content.


Republicans have largely been concerned with getting platforms to remove less content over fears that tech companies will prevent them from reaching their audiences.

Republicans, including Sens. Josh Hawley (R-MO) and Ted Cruz (R-TX) and Rep. Paul Gosar (R-AZ), have popularized changes to the law, typically over claims that platforms are censoring conservative viewpoints. Members of Congress have pointed to specific examples in which posts were removed or accounts were temporarily suspended, but there is no evidence that those actions were taken out of an ideological bias. (In fact, Fox News’ Facebook page has been No. 1 in monthly engagement for the entire platform.)

There is no evidence of systematic censorship of any political ideology on a tech platform, but Gosar’s Stop the Censorship Act sought to prevent platforms from removing content that they found “objectionable.” That would mean they could only remove posts that violated the law.

Meanwhile, Hawley’s Ending Support for Internet Censorship Act would have required platforms’ content moderation teams to be certified as politically “neutral” by a bipartisan panel in order to retain their liability protections.

Neither proposal has so far advanced. Republicans are also behind the EARN IT Act described above.


Among tech platforms, Facebook has led the call for more regulation. In February 2020, CEO Mark Zuckerberg said the company ought to be regulated as something in between a telecommunications company and a newspaper. That same day, Facebook released a white paper laying out the approach it would prefer regulators take.

The approach rests on a handful of core assumptions: that platforms are global and thus subject to many different laws and competing cultural values; that they are intermediaries for speech rather than traditional publishers; that they will change constantly for competitive reasons; and that they will always get some moderation decisions wrong. (There’s another assumption buried in that last one: that they will never hire enough people to screen content in advance or in real time.)

Facebook argues that the government could hold tech platforms accountable for certain key metrics: holding violating posts below a certain number of views, for example, or setting a mandatory median response time for removing them. But the company notes that any of these efforts could create perverse incentives. If platforms are required to remove certain posts within 24 hours, for example, they are likely to simply stop looking at older posts while they focus on posts that are still within the 24-hour window.


Section 230 reform may continue to play a role in the 2020 campaign. Sen. Bernie Sanders has said he would reexamine Section 230 if he is elected president, and Trump has previously signaled at least some desire to modify the law.

Section 230 will probably be modified again. The big questions are when — and how.

Originally published at www.theverge.com
