Free Speech
Part II: Senators Josh Hawley and Ted Cruz Want to Repeal Section 230 and Break the Internet

WASHINGTON, August 20, 2019 — Section 230 of the Communications Decency Act has been termed one of the most important and most misunderstood laws governing the internet.
In recent months, prominent critics from both sides of the aisle have called for the statute to either be repealed or altered so significantly that, if enacted, it would no longer serve its original purpose.
Sen. Josh Hawley, R-Mo., introduced a bill that would eliminate Section 230 protections for big tech platforms unless they could prove their political neutrality to the Federal Trade Commission every two years. Sen. Ted Cruz, R-Texas, has called for the statute to be repealed altogether.
But any such proposal should first carefully consider Section 230’s unique role in the digital ecosystem.
The concept behind Section 230 has its origins in the First Amendment
The statute’s basic premise — protecting the rights of speakers by limiting the liability of third parties who enable them to reach an audience — is hardly new; the First Amendment has served that purpose for decades.
In the 1959 case Smith v. California, the Supreme Court ruled that booksellers could not be held liable for obscene content in the books they sold without knowledge of that content, because the resulting caution would lead to over-enforcement, or “censorship affecting the whole public.”
Five years later, the court ruled in New York Times Co. v. Sullivan that failing to protect newspapers from liability for third-party advertisements would discourage them from carrying such ads, and thereby shut off “an important outlet for the promulgation of information and ideas by persons who do not themselves have access to publishing facilities.”
“In theory, the First Amendment — the global bellwether protection for free speech — should partially or substantially backfill any reductions in Section 230’s coverage,” wrote Eric Goldman, a law professor at Santa Clara University, in an April blog post. “In practice, the First Amendment does no such thing.”
In a paper titled “Why Section 230 Is Better Than the First Amendment,” Goldman explained some of the “significant and irreplaceable substantive and procedural benefits” that are unique to the controversial statute.
Section 230 has pragmatic applications for a range of legal claims
For one, Section 230 has pragmatic applications for defamation, negligence, deceptive trade practices, false advertising, intentional infliction of emotional distress, and dozens of other legal doctrines, some of which have little or no First Amendment defense.
In addition, Section 230 offers more procedural protections and greater legal certainty for defendants. It enables early dismissals, which can save smaller services from financial ruin. It is more predictable than the First Amendment for litigants. It preempts conflicting state laws and facilitates constitutional avoidance.
Most major tech platforms support Section 230, and experts widely agree that the internet would not have been able to develop without the protection of such a law.
“If we were held liable for everything that the users potentially posted…we fundamentally would not be able to exist,” said Jessica Ashooh, Reddit’s director of policy, at a July forum.
But also in July, one prominent tech company broke with the others to support a “reasonable care” standard like that proposed by Danielle Citron, a law professor at the University of Maryland, and Benjamin Wittes, a senior fellow at the Brookings Institution. IBM executive Ryan Hagemann wrote in a blog post that this “would provide strong incentives for companies to limit illegal and illicit behavior online, while also being flexible enough to promote continued online innovation.”
Should online platforms be responsible for deleting objectively harmful content?
Companies should be held legally responsible for quickly identifying and deleting content such as child pornography or the promotion of mass violence or suicide, Hagemann continued. Adding this standard to Section 230 “would add a measure of legal responsibility to what many platforms are already doing voluntarily.”
But Goldman took a different tack. He strongly cautioned against proposals offering Section 230 protections only to defendants who were acting in so-called good faith, warning that “such amorphous eligibility standards would negate or completely eliminate Section 230’s procedural benefits.”
Hagemann, on the other hand, has defended the importance of a compromise-oriented middle ground. Current rhetoric from Congress suggests that changes to the statute are imminent, he said at a panel two weeks after IBM’s statement, and finding a compromise would prevent an extreme knee-jerk reaction from lawmakers who may not view the digital economy with the necessary nuance.
Senators Cruz and Hawley are gunning for effective repeal of Section 230
And as feared, members of Congress such as Cruz and Hawley have skipped right over compromise and started calling for the complete evisceration of Section 230.
Few would claim that Section 230 is perfect; it was written for a digital landscape that has since evolved in previously unimaginable ways. But allowing a body of five commissioners to determine the vague standard of “politically neutral” every two years would almost certainly lead to extreme inconsistency and partisanship.
Moreover, some fear that — contrary to Hawley’s stated intent — his bill might actually be the one thing that cements the major tech giants in their current place of power.
“Even if its initial application were limited to websites above a certain size threshold, that threshold would be inherently arbitrary and calls to lower it to cover more websites would be inevitable,” said TechFreedom President Berin Szóka.
Rather than keeping tech giants like Facebook and Google in check, conditioning Section 230 protections on perceived neutrality could actually benefit them by stifling any potential competition.
“At a time when we’re talking about antitrust investigations and we’re wondering if the biggest players are too big, the last thing we want to do is make a law that makes it harder for smaller companies to compete,” said Ashooh of Reddit.
“Admittedly, it feels strange to tout Section 230’s pro-competitive effect in light of the dominant marketplace positions of the current Internet giants, who acquired their dominant position in part due to Section 230 immunity,” wrote Goldman. “At the same time, it’s likely short-sighted to assume that the Internet industry has reached an immutable configuration of incumbents.”
Other articles in this series:
Section I: The Communications Decency Act is Born
Section II: How Section 230 Builds on and Supplements the First Amendment
Section III: What Does the Fairness Doctrine Have to Do With the Internet?
Section IV: As Hate Speech Proliferates Online, Critics Want to See and Control Social Media’s Algorithms
Free Speech
Additional Content Moderation for Section 230 Protection Risks Reducing Speech on Platforms: Judge
People will migrate away from platforms with overly stringent content moderation measures.

WASHINGTON, March 13, 2023 – Requiring companies to moderate more content as a condition of Section 230’s legal liability protections risks alienating users and discouraging communication, argued a judge of the U.S. Court of Appeals for the District of Columbia Circuit last week.
“The criteria for deletion are vague and difficult to parse,” Douglas Ginsburg, a Ronald Reagan appointee, said at a Federalist Society event on Wednesday. “Some of the terms are inherently difficult to define and policing what qualifies as hate speech is often a subjective determination.”
“If content moderation became very rigorous, it is obvious that users would depart from platforms that wouldn’t run their stuff,” Ginsburg added. “And they will try to find more platforms out there that will give them a voice. So, we’ll have more fragmentation and even less communication.”
Ginsburg noted that the large technology platforms already moderate a massive amount of content, and that additional moderation requirements would be difficult to meet.
“Twitter, YouTube and Facebook remove millions of posts and videos based on those criteria alone,” Ginsburg noted. “YouTube gets 500 hours of video uploaded every minute – 30,000 minutes of video coming online every minute. So the task of moderating this is obviously very challenging.”
John Samples, a member of Meta’s Oversight Board – which provides direction for the company on content – suggested Thursday that out-of-court dispute institutions for content moderation may become the preferred method of settlement.
The United States may adopt European processes in the future as Europe takes the lead in regulating big tech, claimed Samples.
“It would largely be a private system,” he said, one that could unify and centralize social media moderation across platforms and around the world. He was referring to the European Union’s Digital Services Act, which went into effect in November 2022 and requires platforms to remove illegal content and ensure that users can contest the removal of their content.
Section 230
Section 230 Shuts Down Conversation on First Amendment, Panel Hears
The law prevents discussion of how the First Amendment should be applied in a new age of technology, says expert.

WASHINGTON, March 9, 2023 – Section 230 as written shuts down the conversation about the First Amendment, claimed experts in a debate at Broadband Breakfast’s Big Tech & Speech Summit Thursday.
Matthew Bergman, founder of the Social Media Victims Law Center, argued that Section 230 forecloses discussion of how to weigh the costs and benefits of granting big tech companies immunity from litigation over the moderation decisions on their platforms.
We need to talk about what level of First Amendment protection is appropriate in a new world of technology, said Bergman. That discussion happens primarily through open litigation, he said, a process not now available to those harmed by these products.

Photo of Ron Yokubaitis of Texas.net, Ashley Johnson of the Information Technology and Innovation Foundation, Emma Llanso of the Center for Democracy and Technology, Matthew Bergman of the Social Media Victims Law Center, and Chris Marchese of NetChoice (left to right)
All companies must exercise reasonable care, Bergman argued. Opening the door to litigation doesn’t mean that all claims would be viable, he said, only that the process should play out in the courts.
Eliminating Section 230 could lead online services to overcorrect in moderating speech, which could suffocate social reform movements organized on those platforms, argued Ashley Johnson of the Information Technology and Innovation Foundation, a research institution.
Furthermore, the burden of litigation would fall disproportionately on the companies with the fewest resources to defend themselves, she continued.
Bergman responded: “If a social media platform is facing a lot of lawsuits because there are a lot of kids who have been hurt through the negligent design of that platform, why is that a bad thing?” People who are injured have a legal right to seek redress against the entity that caused the injury, he said.
Emma Llanso of the Center for Democracy and Technology suggested that if Section 230 were reformed or abolished, platforms would fundamentally change the way they operate to avoid the threat of litigation, which could threaten their users’ freedom of speech.
It is necessary for the protection of the First Amendment that the internet consist of many platforms with different content moderation policies, so that all people have a voice, she said.
Bergman countered that there is a distinction between censoring speech and designing algorithms that push content on users who did not seek it out – content they may not even know exists.
The question is how to balance faulty product design against the protection of speech, and the courts are where that balancing should take place, said Bergman.
This comes days after legal experts urged Congress to amend the statute to specify that it applies only to speech, rather than to the negligent design of product features that promote harmful content. The discussion followed Supreme Court oral arguments over whether Section 230 immunizes Google against claims based on YouTube’s recommendations of terrorist videos.
Free Speech
Creating Institutions for Resolving Content Moderation Disputes Out-of-Court
Private institutions may become primary method for content moderation disputes, says expert.

WASHINGTON, March 9, 2023 – John Samples, a member of Meta’s Oversight Board, suggested at Broadband Breakfast’s Big Tech & Speech Summit Thursday that out-of-court dispute institutions may become the preferred method for settling content moderation disputes.
Meta’s Oversight Board was created by the company to support free speech by upholding or reversing Facebook’s content moderation decisions. It operates independently of the company and has 40 members from around the world.
The European Union’s Digital Services Act, which came into force in November 2022, requires platforms to remove illegal content and ensure that users can contest removal of their content. It clarifies that platforms are liable for users’ unlawful behavior only if they are aware of it and fail to remove it.
The Act specifies that illegal speech includes speech that harms the electoral system, hate speech, and speech that harms fundamental rights. Its appeals process allows citizens to go directly to the company, to national courts, or to out-of-court dispute resolution institutions – though no such institutions currently exist in Europe.
According to Samples, the Act opens the way for private organizations like the oversight board to play a part in moderation disputes. “Meta has a tremendous advantage here as a first mover,” said Samples, “and the model of the oversight board may well spread to Europe and perhaps other places.”
The United States may adopt European processes in the future as Europe takes the lead in regulating big tech, claimed Samples. “It would largely be a private system,” he said, one that could unify and centralize social media moderation across platforms and around the world.
The private option of self-regulation has worked well, said Samples. “It may well be expanding throughout much of the world. If it goes to Europe, it could go throughout.”
Of the content that Meta reviews for moderation, only one percent is currently restricted, either by taking it down or by reducing the size of the audience exposed to it, said Samples. The Oversight Board, which accepts comments from independent interests, primarily rules against Meta’s decisions.