Section 230
Crackdown on Online Conspiracy Speakers After January 6 Highlights Need for Platform Accountability
WASHINGTON, January 14, 2021 – The power of big technology companies has been on display in the week since the insurrection of January 6, as platform companies have clamped down on speakers and web sites promoting violence and conspiracies, said panelists in a Broadband Breakfast Live Online event on Wednesday.
The implications for Section 230 of the Communications Decency Act were also addressed, as panelists discussed the kinds of protection and immunity from liability currently enjoyed by tech platforms both large and small.
Attorney Cathy Gellis said that we have a governance problem, not a technology problem.
That’s why, she said, solutions to the problem should not target particular technologies or policies that enable technologies, like Section 230.
Journalist Rob Pegoraro acknowledged the growing concern over the market power of tech companies, citing, for example, the ability of Apple and Google to block the smartphone app of Parler, the conservative social media tool, and the ability of Amazon to take Parler off the internet entirely.
He and Jessica Dheere, director of the Ranking Digital Rights program at New America, said that tech companies made rapid decisions that should have been made gradually and incrementally. Instead, the companies responded impulsively to the storming of the Capitol and the associated social media chatter.
Dheere also said that tech companies should be transparent about how they reach these decisions and consistent in applying them.
Panelists pointed out that Section 230 benefits not only large companies but small online players, too. Marginalized voices do not have the same tools the president of the United States has, and platforms such as Twitter have given many of them a megaphone to speak up. It is important to educate policymakers and their staff about the law's impact on a variety of stakeholders.
Where would we be without Section 230?
Getting upset with the internet automatically makes people upset with Section 230, said Gellis in reply to a question from moderator Drew Clark, editor and publisher of Broadband Breakfast, about why Section 230 remains important.
That is not a problem with Section 230 itself, she said, even though it is an unusual law in that it is structured to incentivize good actions rather than penalize bad ones.
But changes to one part of the internet ecosystem can have negative impacts on another, said Ali Sternburg, senior policy counsel at the Computer and Communications Industry Association. She highlighted the need for a global standard and for accountability and transparency from these platforms.
In offering final thoughts in the one-hour conversation, panelists – who were generally supportive of keeping Section 230 – suggested educating the public about the law and addressing other, related issues like privacy and competition.
Understand the problems and challenges before trying to fix them, said Gellis.
Wrecking the entire social media ecosystem will not resolve the problems, said Pegoraro.
Progress on these problems requires scrutinizing platforms' business models and algorithms. It means not scrapping Section 230 but adding to it, said Dheere.
Policymakers should read Section 230 before changing it, said Sternburg. Education continues to be important in this space and should come before any action to change the law.
Will Duffield, policy analyst at the Cato Institute, said that pushing disfavored speakers off mainstream tech platforms is not a static endpoint: Doing so will spur the creation of alternative and perhaps more decentralized infrastructure. No single actor will be able to remove speakers from such a structure, he said.
Special Broadband Breakfast Live Online Town Hall on Section 230 on Wednesday, January 13, 2021
Events in the “Section 230: Separating Fact From Fiction” Series from July 2020 include:
- Event 1: Wednesday, July 1, 2020 — “Content Moderation: How it Works, Why it Works, and Best Practices”
- This panel will consider how different platforms approach content moderation, comparing reasons for a more active or more laissez-faire approach. It will consider what “best practices” have emerged for ensuring online diversity without permitting online harassment. It will also feature a discussion of how platforms moderate content in the U.S. versus internationally.
- Event 2: Wednesday, July 8, 2020 — “Section 230 in an Election Year: How Republicans and Democrats are Approaching Proposed Changes”
- Is Section 230 the new bugaboo of election years? Will life return to normal in 2021? This panel will explore the combination of forces that have made Section 230 susceptible to political pressure from both sides of the aisle.
- Event 3: Wednesday, July 15, 2020 — “Public Input on Platform Algorithms: The Role of Transparency and Feedback in Information Technology”
- This panel will consider what role governments have, or should have, in reacting to the power of tech platforms vis-à-vis their role in public discourse. It aims to weigh the pros and cons of government and public involvement in pushing platforms to adopt greater transparency about the use of their algorithms.
Section 230 Shuts Down Conversation on First Amendment, Panel Hears
The law prevents discussion of how the First Amendment should apply in a new age of technology, says one expert.

WASHINGTON, March 9, 2023 – Section 230 as written shuts down conversation about the First Amendment, experts claimed in a debate at Broadband Breakfast's Big Tech & Speech Summit on Thursday.
Matthew Bergman, founder of the Social Media Victims Law Center, suggested that Section 230 forecloses discussion of the appropriate weighing of costs and benefits of granting big tech companies immunity from litigation over moderation decisions on their platforms.
We need to talk about how much First Amendment protection is appropriate in a new world of technology, said Bergman. That discussion happens primarily through open litigation, he said, a process not now available to those harmed by these products.

Photo of Ron Yokubaitis of Texas.net, Ashley Johnson of Information Technology and Innovation Foundation, Emma Llanso of Center for Democracy and Technology, Matthew Bergman of Social Media Victims Law Center, and Chris Marchese of Netchoice (left to right)
All companies must exercise reasonable care, Bergman argued. Opening the door to litigation does not mean that all claims are viable, only that the process should work itself out in the courts, he said.
Eliminating Section 230 could lead online services to overcorrect in moderating speech, which could suffocate social reform movements organized on those platforms, argued Ashley Johnson of the research institution Information Technology and Innovation Foundation.
Furthermore, the burden of litigation would fall disproportionately on the companies with the fewest resources to defend themselves, she continued.
Bergman responded, “if a social media platform is facing a lot of lawsuits because there are a lot of kids who have been hurt through the negligent design of that platform, why is that a bad thing?” People who are injured have the right by law to seek redress against the entity that caused that injury, Bergman said.
Emma Llanso of the Center for Democracy and Technology suggested that platforms would fundamentally change the way they operate to avoid the threat of litigation if Section 230 were reformed or abolished, which could threaten freedom of speech for their users.
It is necessary for the protection of the First Amendment that the internet consist of many platforms with different content moderation policies, ensuring that all people have a voice, she said.
In response, Bergman argued that there is a distinction between ensuring speech is not censored and algorithms that push content users did not ask to see – even content whose existence was unknown to them.
The question is how to balance faulty product design against protecting speech, and the courts are where that balancing should take place, said Bergman.
This comes days after legal experts urged Congress to amend the statute to specify that it applies only to free speech, rather than to the negligent design of product features that promote harmful speech. The discussion followed Supreme Court arguments on whether to extend immunity to Google for recommending terrorist videos on its video platform YouTube.
Congress Should Amend Section 230, Senate Subcommittee Hears
Legal experts urged Congress to amend the tech protection law to limit its shield for the promotion of harmful information.

WASHINGTON, March 8, 2023 – Legal experts at a Senate Subcommittee on Privacy, Technology and the Law hearing on Wednesday urged Congress to amend Section 230 to specify that it applies only to free speech, rather than to the promotion of misinformation.
Section 230 protects platforms from being treated as the publisher or speaker of information originating from a third party, shielding them from liability for those posts. Mary Anne Franks, professor of law at the University of Miami School of Law, argued that there is a difference between protecting free speech and protecting the harmful dissemination of information.
Hany Farid, professor at the University of California, Berkeley, argued that there should be a distinction between a negligently designed product feature and a core component of the platform's business. YouTube's video recommendation system, for example, is a product feature rather than an essential function, as it is designed solely to maximize advertising revenue by keeping users on the platform, he said.
YouTube claims that its recommendation algorithm is unable to distinguish between two different videos. That, argued Farid, should be considered a negligently designed feature, as YouTube knew or reasonably should have known that it could lead to harm.
Section 230, said Farid, was written to immunize tech companies from defamation litigation, not from any wrongdoing, including the negligent design of their features.
“At a minimum,” said Franks, returning the statute to its original intention “would require amending the statute to make clear that the law’s protections only apply to speech and to make clear that platforms that knowingly promote harmful content are ineligible for immunity.”
At a State of the Net conference earlier this month, Franks emphasized the “Good Samaritan” aspect of the law, claiming that it is supposed to “provide incentives at platforms to actually do the right thing.” Instead, the law does not incentivize platforms to moderate their content, she argued.
Jennifer Bennett of national litigation boutique Gupta Wessler suggested that Congress uphold what is known as the Henderson framework, which would hold a company liable if it materially contributes to what makes content unlawful, including the recommendation and dissemination of the content.
Unfortunately, lamented Eric Schnapper, professor of law at University of Washington School of Law, Section 230 has barred the right of Americans to get redress if they’ve been harmed by big tech. “Absolute immunity breeds absolute irresponsibility,” he said.
Senator Richard Blumenthal, D-Connecticut, warned tech companies at the outset of the hearing that “reform is coming.”
This comes weeks after the Supreme Court heard arguments on whether to extend immunity to Google for recommending terrorist videos on its video platform YouTube. The case exposed industry dissension over whether Section 230 protects algorithmic recommendations, including questions from Justice Brett Kavanaugh about whether YouTube forfeits its protection by using recommendation algorithms.
Premium
Content Moderation, Section 230 and the Future of Online Speech
Our comprehensive report examines the extremely timely issue of content moderation and Section 230 from multiple angles.

In the 27 years since the so-called “26 words that created the internet” became law, rapid technological developments and sharp partisan divides have fueled increasingly complex content moderation dilemmas.
Earlier this year, the Supreme Court tackled Section 230 for the first time through a pair of cases regarding platform liability for hosting and promoting terrorist content. In addition to the court’s ongoing deliberations, Section 230—which protects online intermediaries from liability for third-party content—has recently come under attack from Congress, the White House and multiple state legislatures.