
Section 230

Companies May Hesitate to Bring Section 230 Arguments in Court, Fearing Political Ramifications: Lawyers

Legal experts say changing views on Section 230 will make platforms less willing to employ that defense in future cases.



Carrie Goldberg, founder of C.A. Goldberg law firm

July 14, 2021—Legal experts are speculating that companies may shy away from testing Section 230 arguments in future court cases because recent legal decisions against the defense could influence political action on amending the intermediary liability provision.

Section 230 of the Communications Decency Act offers online platforms immunity from civil liability based on content their users post on their websites. But recent decisions by various courts that have ruled against the companies’ Section 230 defenses and held them liable for incidents could have a lasting effect on how companies approach these cases.

“People are being a lot more thoughtful when they use a 230 defense, and sometimes not using one at all, because they realize that that just won’t bode well for their future cases,” Michele Lee, assistant general counsel and the head of litigation at social media company Pinterest, said at a conference hosted by the Federal Communications Bar Association on Tuesday.

“The number of companies that operate within this space, frankly, aren’t that many. And I think people are thinking much more long term than just the cases that are in front of them.”

Legal experts at the conference argued that firms will be increasingly selective about the cases in which they raise a Section 230 defense. The more attention the defense receives, they argue, the more political attention it is likely to draw, which could reignite discussion about reforming the provision.

Debate about what to do with Section 230 has preoccupied Capitol Hill for many months, with discussions peaking after former President Donald Trump was banned from several platforms at the start of the year for comments he made on the services that allegedly stoked the Capitol riot on January 6.

Since then, several amendments have been proposed, including one from Sen. Amy Klobuchar, D-Minnesota, that would keep Section 230 protections largely intact except for paid content.

And last month, Sen. Marco Rubio, R-Florida, introduced his own proposed legislation, which would “halt Big Tech’s censorship of Americans, defend free speech on the internet, and level the playing field to remove unfair protections that shield massive Silicon Valley firms from accountability.”

Legal precedent and policy: two vehicles for change

The concern for companies that provide platforms for the flow of information is that they could lose certain liability protections through legislation or a change in precedent. Historically, those protections did not take up much mental real estate for members of Congress or the White House, and they were routinely upheld in court.

But that tide may be shifting.

In May, the Ninth Circuit Court of Appeals ruled against the popular messaging company Snapchat’s Section 230 defense, holding that the company could be held civilly liable because it had created a dangerous product. The case followed the death of a 20-year-old Snapchat user who crashed his car in 2020 while using a filter on the app that rewarded fast driving.

The car reached 120 miles per hour at one point, and the crash also killed two teenage passengers. The parents of two of the victims sued Snapchat for wrongful death, claiming that the filter’s reward system encouraged reckless driving.

The case was initially thrown out of court on Section 230 grounds, but the Ninth Circuit revived it, reversing the lower court’s ruling in favor of the victims and holding that Section 230 did not shield Snapchat from claims that it created an inherently dangerous product.

Carrie Goldberg, founder of C.A. Goldberg, a victims’ rights law firm, said Tuesday that this ruling offers a “small window of online platform accountability,” in which platforms might be held liable for published content when that content demonstrates a harm to the public.

Goldberg also referenced a Texas case from last month, in which the state’s supreme court ruled that Facebook could be held liable in suits brought by three plaintiffs who alleged they became victims of sex trafficking after being lured in by people they met on Facebook and Instagram.

Facebook claimed immunity through Section 230, but the court sided with the plaintiffs, saying the provision does not “create a lawless no-man’s-land on the Internet.” The court made a further clarification that Section 230 protects online platforms from the words or actions of others, but “[h]olding internet platforms accountable for their own misdeeds is quite another thing.”

This particular case may apply only within Texas, however, and have little impact on the rest of the country, as part of it was fought under a Texas-specific statute that allows civil lawsuits “against those who intentionally or knowingly benefit from participation in a sex-trafficking venture.”

In May, observers noted that these legal decisions reversing course on Section 230 could open the floodgates to other lawsuits across the country.

Reporter Tyler Perkins studied rhetoric and English literature, as well as economics and mathematics, at the University of Utah. Although he grew up in the West (Oregon and Utah) and never left it until recently, he intends to study law and build a career on the East Coast. In his free time, he enjoys reading excellent literature and playing poor golf.


Section 230 Shuts Down Conversation on First Amendment, Panel Hears

The law prevents discussion on how the first amendment should be applied in a new age of technology, says expert.



Photo of Ron Yokubaitis, Ashley Johnson of the Information Technology and Innovation Foundation, Emma Llanso of the Center for Democracy and Technology, Matthew Bergman of the Social Media Victims Law Center, and Chris Marchese of NetChoice (left to right)

WASHINGTON, March 9, 2023 – Section 230 as it is written shuts down conversation about the First Amendment, experts claimed in a debate at Broadband Breakfast’s Big Tech & Speech Summit Thursday.

Matthew Bergman, founder of the Social Media Victims Law Center, suggested that Section 230 forecloses discussion of how to appropriately weigh the costs and benefits of granting big tech companies immunity from litigation over moderation decisions on their platforms.

We need to talk about what level of First Amendment protection is appropriate in a new world of technology, said Bergman. That discussion happens primarily through open litigation, he said, a process not currently available to those harmed by these products.


All companies must exercise reasonable care, Bergman argued. Opening up litigation does not mean that every claim is viable, only that the process should work itself out in the courts of law, he said.

Eliminating Section 230 could lead online services to “overcorrect” in moderating speech, which could suffocate social reform movements organized on those platforms, argued Ashley Johnson of the Information Technology and Innovation Foundation, a research institution.

Furthermore, the burden of litigation would fall disproportionately on companies with fewer resources to defend themselves, she continued.

Bergman responded, “if a social media platform is facing a lot of lawsuits because there are a lot of kids who have been hurt through the negligent design of that platform, why is that a bad thing?” People who are injured have the right by law to seek redress against the entity that caused that injury, Bergman said. 

Emma Llanso of the Center for Democracy and Technology suggested that if Section 230 were reformed or abolished, platforms would fundamentally change how they operate to avoid the threat of litigation, which could threaten freedom of speech for their users.

It is necessary for the protection of the First Amendment that the internet consist of many platforms with different content moderation policies, to ensure that all people have a voice, she said.

To this, Bergman argued that there is a distinction between ensuring speech is not censored and algorithms that push content users do not want to see, or are not even aware exists.

It is a question of balancing the faulty design of a product against the protection of speech, and the courts are where that balancing act should take place, said Bergman.

This comes days after legal professionals urged Congress to amend the statute to specify that it applies only to free speech, rather than to the negligent design of product features that promote harmful speech. The discussion followed Supreme Court arguments over whether to provide immunity to Google for recommending terrorist videos on its video platform YouTube.




Congress Should Amend Section 230, Senate Subcommittee Hears

Experts urged Congress to amend tech protection law to limit protection for the promotion of harmful information.



Photo of Hany Farid, professor at the University of California, Berkeley

WASHINGTON, March 8, 2023 – Legal professionals at a Senate Subcommittee on Privacy, Technology and the Law hearing on Wednesday urged Congress to amend Section 230 to specify that it applies only to free speech, rather than the promotion of misinformation.

Section 230 protects platforms from being treated as the publisher or speaker of information originating from a third party, shielding them from liability for their users’ posts. Mary Anne Franks, professor of law at the University of Miami School of Law, argued that there is a difference between protecting free speech and protecting the harmful dissemination of information.

Hany Farid, professor at the University of California, Berkeley, argued that there should be a distinction between a negligently designed product feature and a core component of the platform’s business. For example, YouTube’s video recommendation system is a product feature rather than an essential function, as it is designed solely to maximize advertising revenue by keeping users on the platform, he said.

YouTube claims that its recommendation algorithm is unable to distinguish between two different videos. This, argued Farid, should be considered a negligently designed feature, as YouTube knew or reasonably should have known that it could lead to harm.

Section 230, said Farid, was written to immunize tech companies from defamation litigation, not from any wrongdoing, including the negligent design of their features.

“At a minimum,” said Franks, returning the statute to its original intention “would require amending the statute to make clear that the law’s protections only apply to speech and to make clear that platforms that knowingly promote harmful content are ineligible for immunity.”

At a State of the Net conference earlier this month, Franks emphasized the “good Samaritan” aspect of the law, claiming that it is supposed to “provide incentives for platforms to actually do the right thing.” Instead, she argued, the law does not incentivize platforms to moderate their content.

Jennifer Bennett of the national litigation boutique Gupta Wessler suggested that Congress uphold what is known as the Henderson framework, which would hold a company liable if it materially contributes to what makes content unlawful, including the recommendation and dissemination of that content.

Unfortunately, lamented Eric Schnapper, professor of law at University of Washington School of Law, Section 230 has barred the right of Americans to get redress if they’ve been harmed by big tech. “Absolute immunity breeds absolute irresponsibility,” he said.

Senator Richard Blumenthal, D-Connecticut, warned tech companies at the outset of the hearing that “reform is coming.”

This comes weeks after the Supreme Court heard arguments over whether to provide immunity to Google for recommending terrorist videos on its video platform YouTube. The case saw industry dissension over whether Section 230 protects algorithmic recommendations. Justice Brett Kavanaugh raised the argument that YouTube forfeited its protection by using recommendation algorithms, a view that did not prevail.



Content Moderation, Section 230 and the Future of Online Speech

Our comprehensive report examines the extremely timely issue of content moderation and Section 230 from multiple angles.



In the 27 years since the so-called “26 words that created the internet” became law, rapid technological developments and sharp partisan divides have fueled increasingly complex content moderation dilemmas.

Earlier this year, the Supreme Court tackled Section 230 for the first time through a pair of cases regarding platform liability for hosting and promoting terrorist content. In addition to the court’s ongoing deliberations, Section 230—which protects online intermediaries from liability for third-party content—has recently come under attack from Congress, the White House and multiple state legislatures.


