Section 230: The Tiny Clause that Rules the Internet

Section 230 is a tiny clause in a massive bill, but it determines every online experience you’ve ever had.

The interminable battle between “free speech” and liability for illegal online activities often winds up with lawmakers, journalists, academics, and everyone in between talking about Section 230.

It’s rare that a single subsection of legislation receives so much attention and is debated with so much vigor and, often, venom. But this single section of the Communications Decency Act of 1996 determines how every single one of us interacts with the internet on a day-to-day basis.

Now, Republicans and Democrats alike are looking to repeal Section 230 — the former in the name of “free speech” and the latter to enforce stronger rules on social media giants.

Is the internet about to undergo an irrevocable change in the name of political point-scoring?

Section 230: How Did We Get Here?

Way back in 1996, when AOL was the biggest website in the world and a year before Google.com was registered as a domain, two US lawmakers wrote the bill that would create the internet we know today.

“Ron Wyden and Chris Cox,” says Cris Pikes, CEO of Image Analyzer, a visual content moderation company, “had the goal of combatting online pornography, while protecting online creativity and free speech.”

However, with Pornhub attracting more monthly users than Netflix, Amazon, and Reddit, Wyden and Cox’s bill failed on its first objective.

“Wyden described Section 230 as providing a ‘shield and a sword,’” continues Pikes. “The law provides a ‘shield’ because it protects social media platforms and interactive website operators from legal liability for third-party content posted to their sites.

“However, the law also allows online service providers to remove any content that flouts their own acceptable use rules: the ‘sword’. But Section 230 does not compel providers to moderate content in this way.”

This legal limbo is complicated further by the “Good Samaritan” portion of the law. “Online service operators are viewed as distributors of content, rather than publishers, and are therefore not held legally responsible for third-party content uploaded to their site,” explains Pikes. “This is a defense often used by Facebook, Google, and Twitter when called to account for failure to swiftly remove hate speech, disinformation, and harmful videos posted by users.”

The Communications Decency Act runs to well over 100 pages, but Section 230 makes up fewer than 700 words of the behemoth bill. At the time, Wyden and Cox could hardly have foreseen what the internet would become.

Backpage and Salesforce – Who Controls What?

One of the most high-profile recent controversies around Section 230 involves Backpage.com, a classified ad website that became a hub for buying and selling sex in the US — which is illegal across the country save for some counties in Nevada.

Backpage.com itself was seized by the FBI back in 2018, but the saga rumbles on, largely due to the role of Salesforce, a customer relationship management (CRM) software giant, in the site’s operation.

The case against Backpage.com and its CEO, Carl Ferrer, was pretty cut-and-dried. In April 2018 he pled guilty to conspiracy to facilitate prostitution and money laundering. His website was a hub for child sex trafficking in America.

However, a judge in Texas ruled last month that Salesforce should face trial as well, with the plaintiffs alleging that the company’s CRM software was used to “actively obtain and monitor data and additional information related to pimps and sex traffickers that were using Backpage.”

Salesforce started supplying software to Backpage.com in 2013, and the plaintiffs allege that the CRM giant “was in a position to learn, and in fact did learn, about illegal business practices of Backpage… [and] armed with this knowledge Salesforce chose to financially benefit by doing business with Backpage.”

Salesforce wasn’t responsible for the content posted on Backpage, but by supplying the tools used to carry out illegal activity, is it still liable?

“Website owners can decide who they allow to post content and have editorial control over all of the content on their website,” says Allan Dunlavy, Partner at reputation and privacy consultancy Schillings. “Given this, website owners should be held responsible to use this editorial control to ensure that the content on their site isn’t illegal or otherwise harmful.”

“The reality is that any website hosting content is publishing it,” Dunlavy continues. “Even a website such as Craigslist or Backpage that is hosting classified-type adverts needs to be responsible for what they are publishing.”

However, when it comes to Salesforce’s role, the situation is murkier.

“B2B software providers like Salesforce are very different from publicly available websites that host material,” explains Dunlavy. “Salesforce, as an example, licenses CRM software to businesses for their private use and the data and content on those services is not publicly available. Salesforce has no editorial control over the content. They cannot review and approve before publication, they cannot check it after publication and they cannot remove it.”

“In a similar vein,” he continues, “Microsoft licenses software to users for their private use and has no editorial control at all over what someone may type into a Word document that they are using as part of an Office license. As such, if the content put into Word is illegal or harmful then the user, rather than Microsoft, should be liable as they have the editorial control.”

“Public cloud providers and platform operators are caught in a legal limbo between public opinion which poses an existential threat and technology laws that are no longer fit for purpose,” says Pikes.

However, viewed more broadly, even normal, legal companies can, unwittingly or otherwise, be seen to exercise a semblance of editorial control.

“Within twenty-four hours of the El Paso massacre that took place on 3rd August 2019,” says Pikes, “[internet security company] Cloudflare withdrew its services from 8Chan because it had been used to share extremist manifestos uploaded by those accused of carrying out the attack.”

“Stripped of Cloudflare’s protection from distributed denial of service attacks,” he continues, “8Chan was taken offline within a matter of hours. Similarly, in January 2021, following the fatal insurrection at Capitol Hill, Amazon Web Services withdrew its cloud hosting services from right-wing free speech forum, Parler, forcing it offline.”

If the plaintiffs are correct and Salesforce was aware of the illegal activities its technology was used for on Backpage.com, then surely it could have pulled the plug? Doing so certainly would have saved the company some face.

Salesforce’s lawyers would argue that the CRM platform cannot dictate what its services are used for by customers. Is Walmart liable for the guns and ammunition it sells if those products are used in a terrorist attack?

Should Section 230 be Repealed?

There is broad consensus that Section 230 is no longer fit for its purpose. That isn’t anyone’s fault — the internet was a very different place in 1996.

“Repealing Section 230 would undoubtedly make the internet more accountable,” says Dunlavy, “and in my view, this is not only a good idea – it is imperative to the well-being of our societies and democracy.”

“It is almost impossible to hold any US-based website liable for any content that it is publishing on its website — including content that it allows third parties to post and continues to publish,” continues Dunlavy. “Anyone seeking to assert their legal rights is forced to try to go against the underlying ‘content provider,’ i.e. the user. However, in the vast majority of cases, the user is anonymous, and so it is in practical terms often completely impossible for someone to assert their rights – e.g. to not being harassed or bullied – and as such they are in reality being denied their legal rights.”

However, repealing Section 230 is, in the minds of some, missing the point.

“Repealing Section 230 is an overly broad measure that would eliminate freedom of speech on the internet by shooting the messenger,” says Paul Bischoff, privacy advocate at Comparitech. “Advocates of repeal argue that doing so would help prevent misinformation, libel, and incitement by making internet companies liable for content posted by users on their platforms.”

Repeal “would also force internet companies to manually pre-screen all user-generated content — videos, classified ads, social media posts, advertisements, etc. — before being made public. Given the huge volume of user content put online every day, this just isn’t feasible,” continues Bischoff. “And even if it were, it would result in internet companies censoring a huge amount of content in order to minimize the risk of liability.”

For some, however, these arguments simply don’t cut the mustard.

“Free speech is the cornerstone of democracy and it is crucial that this is protected,” counters Dunlavy. “With this right comes the responsibility to use the right in a reasonable and responsible manner. Free speech does not mean that anyone can be bullied, abused, or harassed without any consequences or often even knowing who is responsible.”

Of course, if simply repealing Section 230 were the silver bullet for a better internet, it would have been repealed a long time ago.

There’s also no guarantee that a post-Section 230 internet would make the situation any better. There are currently two bipartisan bills addressing Section 230 being debated in the US.

There’s the Eliminating Abusive and Rampant Neglect of Interactive Technologies Act or, more simply, the EARN IT Act, which is being pushed by Senators Lindsey Graham (R-S.C.) and Richard Blumenthal (D-Conn.). Then there’s the Platform Accountability and Consumer Transparency Act, or PACT Act, which is being pushed by Senators John Thune (R-S.D.) and Brian Schatz (D-Hawaii).

The EARN IT Act is ostensibly designed to combat child sexual exploitation online. However, it has been widely criticized as a veiled attack on encryption software.

Child predators “communicate using virtually unbreakable encryption,” US Attorney General William Barr said during the press conference announcing the bill.

However, everyone else also communicates with tools that use end-to-end encryption, which ensures that only the sender and the intended recipient can read a message; the platform carrying it cannot. Removing this technology could, theoretically, give the US government absolute and unending surveillance over the lives of its own people, as well as of citizens from all over the world who use US-based services.
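
To make the stakes concrete, here is a minimal sketch of public-key end-to-end encryption using the PyNaCl library (Python bindings for libsodium); the key pairs, names, and message are purely illustrative, and real messaging apps such as Signal or WhatsApp layer far more on top, including key verification and forward secrecy.

    # Illustrative sketch of end-to-end encryption with PyNaCl (libsodium).
    # Not any particular app's protocol; real messengers add key verification,
    # forward secrecy, and group handling on top of primitives like these.
    from nacl.public import PrivateKey, Box

    # Each user generates a key pair; only the public halves are ever shared.
    alice_key = PrivateKey.generate()
    bob_key = PrivateKey.generate()

    # Alice encrypts a message that only Bob's private key can unlock.
    sender_box = Box(alice_key, bob_key.public_key)
    ciphertext = sender_box.encrypt(b"Meet at noon")

    # Any platform relaying `ciphertext` sees only unreadable bytes.
    # Bob decrypts with his private key and Alice's public key.
    receiver_box = Box(bob_key, alice_key.public_key)
    print(receiver_box.decrypt(ciphertext))  # b'Meet at noon'

The point the encryption debate turns on sits in the middle of that sketch: the service relaying the message only ever handles ciphertext, so it cannot hand over readable content even when compelled to.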

The PACT Act, on the other hand, would legally mandate that companies moderate all of their content and would make them liable for the content they host. The Electronic Frontier Foundation has said that the act has a number of problems:

“The bill’s modifications of Section 230 will lead to greater, legally required online censorship, likely harming disempowered communities and disfavored speakers. It will also imperil the existence of small platforms and new entrants trying to compete with Facebook, Twitter, and YouTube by saddling them with burdensome content moderation practices and increasing the likelihood that they will be dragged through expensive litigation. The bill also has several First Amendment problems because it compels online services to speak and interferes with their rights to decide for themselves when and how to moderate the content their users post.”

Thinking that any single bill could solve all of the internet’s problems is, frankly, ridiculous. Thinking that any bill could keep everyone happy is even more outlandish. The problems posed by the information superhighway go right to the heart of America’s culture wars.

Is Section 230 Too Politicized?

“Eliminating Section 230 appears to be one of the few legislative agenda items upon which those on the political left and those on the political right in the US agree,” according to Dunlavy. “However, they want to eliminate the provision for completely opposite reasons.”

Right-wingers typically believe that internet companies are trying to censor their viewpoints, which they see as a direct contradiction of the First Amendment. According to Dunlavy, these arguments are misplaced.

“The First Amendment only prevents the Government from limiting an individual’s free speech. It does not concern private companies like tech companies. So by comparison, if you own a bar and someone in the bar is shouting racist and abusive things, then you – as the owner of the bar – have the right to throw that person out of your bar.”

Those on the left, meanwhile, believe that those same companies, through their inaction, are complicit in the spread of online misinformation and abuse.

There appears to be no middle ground between the two sides and Section 230 will remain a political football for the foreseeable future.


Written by:
Tom Fogden is a writer for Tech.co with a range of experience in the world of tech publishing. Tom covers everything from cybersecurity, to social media, website builders, and point of sale software when he's not reviewing the latest phones.