
Free Speech

Part II: Senators Josh Hawley and Ted Cruz Want to Repeal Section 230 and Break the Internet


Photo of Reddit Director of Policy Jessica Ashooh courtesy of Misk Global Forum

WASHINGTON, August 20, 2019 — Section 230 of the Communications Decency Act has been termed one of the most important and most misunderstood laws governing the internet.

In recent months, prominent critics from both sides of the aisle have called for the statute to either be repealed or altered so significantly that, if enacted, it would no longer serve its original purpose.

Sen. Josh Hawley, R-Mo., introduced a bill that would eliminate Section 230 protections for big tech platforms unless they could prove their political neutrality to the Federal Trade Commission every two years. Sen. Ted Cruz, R-Texas, has called for the statute to be repealed altogether.

But any such proposal should first carefully consider Section 230’s unique role in the digital ecosystem.

The concept behind Section 230 has its origins in the First Amendment

The statute’s basic premise — protecting the rights of speakers by limiting the liability of third parties who enable them to reach an audience — is hardly new; the First Amendment has served that purpose for decades.

In the 1959 case Smith v. California, the Supreme Court ruled that booksellers could not be held liable for obscene content in the books being sold, because the resulting confusion and caution would lead to over-enforcement, or “censorship affecting the whole public.”

Five years later, the court ruled in New York Times Co. v. Sullivan that failing to protect newspapers from liability for third-party advertisements would discourage them from carrying such ads, and thereby shut off “an important outlet for the promulgation of information and ideas by persons who do not themselves have access to publishing facilities.”

“In theory, the First Amendment — the global bellwether protection for free speech — should partially or substantially backfill any reductions in Section 230’s coverage,” wrote Eric Goldman, a law professor at Santa Clara University, in an April blog post. “In practice, the First Amendment does no such thing.”

In a paper titled “Why Section 230 Is Better Than the First Amendment,” Goldman explained some of the “significant and irreplaceable substantive and procedural benefits” that are unique to the controversial statute.

Section 230 has pragmatic applications for a range of legal claims

For one, Section 230 has pragmatic applications for defamation, negligence, deceptive trade practices, false advertising, intentional infliction of emotional distress, and dozens of other legal doctrines, some of which have little or no First Amendment defense.

In addition, Section 230 offers more procedural protections and greater legal certainty for defendants. It enables early dismissals, which can save smaller services from financial ruin. It is more predictable than the First Amendment for litigants. It preempts conflicting state laws and facilitates constitutional avoidance.

Most major tech platforms support Section 230, and experts widely agree that the internet would not have been able to develop without the protection of such a law.

“If we were held liable for everything that the users potentially posted…we fundamentally would not be able to exist,” said Jessica Ashooh, Reddit’s director of policy, at a July forum.

But also in July, one prominent tech company broke with the others to support a “reasonable care” standard like that proposed by Danielle Citron, a law professor at the University of Maryland, and Benjamin Wittes of the Brookings Institution. IBM executive Ryan Hagemann wrote in a blog post that this “would provide strong incentives for companies to limit illegal and illicit behavior online, while also being flexible enough to promote continued online innovation.”

Should online platforms be responsible for deleting objectively harmful content?

Companies should be held legally responsible for quickly identifying and deleting content such as child pornography or the promotion of mass violence or suicide, Hagemann continued. Adding this standard to Section 230 “would add a measure of legal responsibility to what many platforms are already doing voluntarily.”

But Goldman took a different tack. He strongly cautioned against proposals offering Section 230 protections only to defendants who were acting in so-called good faith, warning that “such amorphous eligibility standards would negate or completely eliminate Section 230’s procedural benefits.”

Hagemann, on the other hand, has defended the importance of a compromise-oriented middle ground. Current rhetoric from Congress suggests that changes to the statute are imminent, he said at a panel two weeks after IBM’s statement, and finding a compromise would prevent an extreme knee-jerk reaction from lawmakers who may not view the digital economy with the necessary nuance.

Senators Cruz and Hawley are gunning for effective repeal of Section 230

And as feared, members of Congress such as Cruz and Hawley have skipped right over compromise and started calling for the complete evisceration of Section 230.

Few would claim that Section 230 is perfect; it was written for a digital landscape that has since evolved in previously unimaginable ways. But allowing a body of five commissioners to determine the vague standard of “politically neutral” every two years would almost certainly lead to extreme inconsistency and partisanship.

Moreover, some fear that — contrary to Hawley’s stated intent — his bill might actually be the one thing that cements the major tech giants in their current place of power.

“Even if its initial application were limited to websites above a certain size threshold, that threshold would be inherently arbitrary and calls to lower it to cover more websites would be inevitable,” said TechFreedom President Berin Szóka.

Rather than keeping tech giants like Facebook and Google in check, conditioning Section 230 protections on perceived neutrality could actually benefit them by stifling any potential competition.

“At a time when we’re talking about antitrust investigations and we’re wondering if the biggest players are too big, the last thing we want to do is make a law that makes it harder for smaller companies to compete,” said Ashooh of Reddit.

“Admittedly, it feels strange to tout Section 230’s pro-competitive effect in light of the dominant marketplace positions of the current Internet giants, who acquired their dominant position in part due to Section 230 immunity,” wrote Goldman. “At the same time, it’s likely short-sighted to assume that the Internet industry has reached an immutable configuration of incumbents.”

Other articles in this series:

Section I: The Communications Decency Act is Born

Section II: How Section 230 Builds on and Supplements the First Amendment

Section III: What Does the Fairness Doctrine Have to Do With the Internet?

Section IV: As Hate Speech Proliferates Online, Critics Want to See and Control Social Media’s Algorithms

Development Associate Emily McPhie studied communication design and writing at Washington University in St. Louis, where she was a managing editor for campus publication Student Life. She is a founding board member of Code Open Sesame, an organization that teaches computer skills to underprivileged children in six cities across Southern California.

Section 230

Repealing Section 230 Would Be Harmful to the Internet As We Know It, Experts Agree

While some advocate for a tightening of language, other experts believe Section 230 should not be touched.


Rep. Ken Buck, R-Colo., speaking on the floor of the House

WASHINGTON, September 17, 2021 — Rep. Ken Buck, R-Colo., advocated for legislators to “tighten up” the language of Section 230 while preserving the “spirit of the internet” and enhancing competition.

There is common ground in supporting efforts to minimize speech advocating imminent harm, Buck said, even as he noted that Republican and Democratic critics tend to approach changing Section 230 from vastly different directions.

“Nobody wants a terrorist organization recruiting on the internet or an organization that is calling for violent actions to have access to Facebook,” Buck said. He followed up that statement, however, by stating that the most effective way to combat “bad speech is with good speech” and not by censoring “what one person considers bad speech.”

Antitrust not necessarily the best means to improve competition policy

For companies that are not technically in violation of antitrust law, improving competition through other means would have to be the answer, said Buck. He pointed to Parler as a social media platform that is an appropriate alternative to Twitter.

Though some Twitter users did flock to Parler, particularly during and around the 2020 election, the newer social media company has a reputation for allowing objectionable content that mainstream platforms would remove.

Buck also set himself apart from some of his fellow Republicans—including Donald Trump—by clarifying that he does not want to repeal Section 230.

“I think that repealing Section 230 is a mistake,” he said. “If you repeal Section 230, there will be a slew of lawsuits.” Buck explained that without the statute’s protections, big companies would likely find a way to weather those lawsuits, and the only entities harmed would be the alternative platforms meant to serve as competition.

More content moderation needed

Daphne Keller of the Stanford Cyber Policy Center argued that it is in the best interest of social media platforms to enact various forms of content moderation and to address speech that may be legal but objectionable.

“If platforms just hosted everything that users wanted to say online, or even everything that’s legal to say—everything that the First Amendment permits—you would get this sort of cesspool or mosh pit of online speech that most people don’t actually want to see,” she said. “Users would run away and advertisers would run away and we wouldn’t have functioning platforms for civic discourse.”

Even companies like Parler and Gab—which pride themselves on being unyielding bastions of free speech—have begun to engage in content moderation.

“There’s not really a left-right divide on whether that’s a good idea, because nobody actually wants nothing but porn and bullying and pro-anorexia content and other dangerous or garbage content all the time on the internet,” she said.

She explained that this is a double-edged sword, because while consumers seem to value some level of moderation, companies moderating their platforms have a huge amount of influence over what their consumers see and say.

What problems do critics of Section 230 want addressed?

Internet Association President and CEO Dane Snowden said that much of the disagreement in the Section 230 debate boils down to a fundamental dispute over which problems legislators are actually trying to solve.

Changing the language of Section 230 would impact not just the tech industry: “[Section 230] impacts ISPs, libraries, and universities,” he said. “Things like self-publishing, crowdsourcing, Wikipedia, how-to videos—all those things are impacted by any kind of significant neutering of Section 230.”

Section 230 was created to give users the ability and security to create content online without fear of legal reprisals, he said.

Another significant supporter of the status quo was Chamber of Progress CEO Adam Kovacevich.

“I don’t think Section 230 needs to be fixed. I think it needs [a better] publicist,” Kovacevich said, arguing that policymakers need to gain a better appreciation for the statute. “If you took away 230, you would give companies two bad options: either turn into Disneyland or turn into a wasteland.”

“Either turn into a very highly curated experience where only certain people have the ability to post content, or turn into a wasteland where essentially anything goes because a company fears legal liability,” Kovacevich said.



Judge Rules Exemption Exists in Section 230 for Twitter FOSTA Case

Latest lawsuit illustrates the increasing fragility of Section 230 legal protections.


Twitter CEO Jack Dorsey.

August 24, 2021 — A California court has allowed a lawsuit to proceed against Twitter from two victims of sex trafficking, who allege the social media company initially refused to remove sexually exploitative content depicting the underage plaintiffs, content that then went viral.

The anonymous plaintiffs allege that they were manipulated into making pornographic videos of themselves through another social media app, Snapchat, after which the videos were posted on Twitter. When the plaintiffs asked Twitter to take down the posts, it refused, and it was only after the Department of Homeland Security got involved that the social media company complied.

At issue in the case is whether Twitter had any obligation to remove the content immediately under Section 230 of the Communications Decency Act, which shields platforms from legal liability for content their users post.

Court’s finding

The court ruled Thursday that the case should proceed after finding that Twitter knew such content was on the site, had to have known it involved sex trafficking, and refused to act immediately.

“The Court finds that these allegations are sufficient to allege an ongoing pattern of conduct amounting to a tacit agreement with the perpetrators in this case to allow them to post videos and photographs it knew or should have known were related to sex trafficking without blocking their accounts or the Videos,” the decision read.

“In sum, the Court finds that Plaintiffs have stated a claim for civil liability under the [Trafficking Victims Protection Reauthorization Act] on the basis of beneficiary liability and that the claim falls within the exemption to Section 230 immunity created by FOSTA.”

The Stop Enabling Sex Traffickers Act and the Allow States and Victims to Fight Online Sex Trafficking Act, passed together in 2018 as the package law SESTA-FOSTA, amended Section 230 to exclude the enforcement of federal and state sex trafficking laws from its intermediary protections.

The court dismissed other claims the plaintiffs made against the company, but found that the surviving claim met the relatively low bar to move the case forward.

The arguments

The plaintiffs allege that Twitter violated the TVPRA because it knew about the videos, benefited from them, and did nothing to address the problem before the videos went viral.

Twitter argued that FOSTA, as applied to the CDA, only narrowly applies to websites that are “knowingly assisting and profiting from reprehensible crimes;” the plaintiffs allegedly fail to show that the company “affirmatively participated” in such crimes; and the company cannot be held liable “simply because it did not take the videos down immediately.”

Experts asserted companies may hesitate to bring Section 230 defense in court

The case is yet another instance of U.S. courts poking holes in technology companies’ arguments that, under Section 230, they cannot be held liable for content on their platforms. The provision is currently the subject of heated debate in Washington over whether to reform or abolish it.

Several state judges have ruled against Amazon’s Section 230 defense, for example, in case-specific instances in Texas and California. Experts on a panel in May said that if courts keep ruling against the defense, a deluge of lawsuits against companies may follow.

And last month, citing some of these cases, lawyers argued that big tech companies may begin to shy away from raising the Section 230 defense in court, for fear of awakening lawmakers to changing legal views on the provision that could ignite its reform.



Facebook, Google, Twitter Register to Lobby Congress on Section 230

Companies also want to discuss cybersecurity, net neutrality, taxes and privacy.


Facebook CEO Mark Zuckerberg

August 3, 2021 — The largest social media companies have registered to lobby Congress on Section 230, according to lobby records.

Facebook, Google, and Twitter filed new paperwork late last month to discuss the internet liability provision under the Communications Decency Act, which protects these companies from legal trouble for content their users post.

Facebook’s registration specifically mentions the Safe Tech Act, an amendment to the provision proposed earlier this year by Sens. Amy Klobuchar, D-Minn., Mark Warner, D-Va., and Mazie Hirono, D-Hawaii, which would largely keep the provision’s protections except for content the platforms are paid to carry.

A separate Facebook registration included discussion on the “repeal” of the provision.

Other issues included in the Menlo Park-based company’s registration are privacy, data security, online advertising, and general regulations on the social media industry.

Google also wants to discuss taxes and cybersecurity, as security issues take center stage following high-profile attacks and as international proposals for a new tax regime on tech companies emerge.

Twitter’s registration also lists content moderation practices, data security, misinformation, and net neutrality, as the Federal Communications Commission is urged to bring back Obama-era policies built on the principle that content cannot be given preferential treatment on networks.

Section 230 has gripped Congress

Critics of social media have been clamoring for retaliatory measures against the technology companies, which have taken increasingly strong action against users who violate their policies.

Those discussions picked up steam when, at the beginning of the year, former President Donald Trump was banned from Twitter, and then from Facebook and other platforms, for allegedly stoking the Capitol Hill riot on January 6. (Trump has since filed a lawsuit as a private citizen against the social media giants for his removal.)

Since the Capitol riot, a number of proposals have been put forward to amend — in some cases completely repeal — the provision to address what some Republicans are calling outright censorship by social media companies. Florida even tried to take matters into its own hands by enacting a law that penalized social media companies for banning politicians. That law has since been put on hold by the courts.

The social media giants, and their allies in the industry, have pressed the importance of the provision, which they say has allowed once-fledgling companies like Facebook to become what they are today. Some representatives think reform of the law could lean more toward amendment than outright repeal. But lawyers have warned about a shift in attitude toward those liability protections, as more judges in courts across the country hold big technology companies accountable for harm caused by their platforms.

