Attorney General Bill Barr Calls for ‘Recalibrated’ Section 230 as Justice Department Hosts Tech Immunity Workshop

Photo of Attorney General Bill Barr in May 2019 by Shane McCoy used with permission

WASHINGTON, February 19, 2020 – Attorney General William Barr laid out the case for “recalibrating” Section 230 of the Communications Decency Act in response to what he called the concentration of power over information in the hands of Silicon Valley tech companies.

Because “the big tech platforms of today often monetize” their power through advertising, “their financial incentives in content distribution may not always align with what is best for the user,” Barr said in remarks kicking off a Wednesday workshop at the U.S. Justice Department.

Originally a non-controversial law seen as a means to incentivize online free speech, Section 230 has come to be seen as amplifying the ills wrought by information technology. Populists on the right and progressives on the left are now calling for changes to Section 230.

The Justice Department’s workshop may be an effort to put Section 230 protections for tech companies on the chopping block.

At the same time, Barr’s agency is leading a major antitrust inquiry into the tech sector, potentially up to and including efforts to break up Google or Facebook.

“While the department’s antitrust review is looking at these developments from a competition perspective, we must also recognize what this concentration means for Section 230 immunity,” Barr said.

Background about the origins of Section 230

Section 230 became law as part of the 1996 Telecommunications Act. In those early days of the internet, Section 230 arose against a backdrop of online service providers such as America Online, CompuServe, and Prodigy. CompuServe did not engage in any form of content moderation, whereas Prodigy positioned itself as a family-friendly alternative by enforcing content guidelines and screening offensive language.

It didn’t take long for both platforms to be sued for defamation. In the 1991 case Cubby v. CompuServe, the federal district court in New York ruled that CompuServe could not be held liable for third party content of which it had no knowledge, similar to a newsstand or library.

But in 1995, the New York Supreme Court ruled in Stratton Oakmont v. Prodigy that the latter platform had taken on liability for all posts simply by attempting to moderate some, which constituted editorial control.

The decision prompted pro-technology lawmakers Rep. Ron Wyden, D-Ore., and Rep. Chris Cox, R-Calif., to introduce an amendment to the Communications Decency Act ensuring that providers of an interactive computer service would not be held liable for third-party content, thus allowing them to moderate with impunity.

See Broadband Breakfast’s four-part series on the CDA:

Section I: The Communications Decency Act is Born

Section II: How Section 230 Builds on and Supplements the First Amendment

Section III: What Does the Fairness Doctrine Have to Do With the Internet?

Section IV: As Hate Speech Proliferates Online, Critics Want to See and Control Social Media’s Algorithms

Barr blasts the ‘many’ problems with a ‘broad Section 230 immunity’

At the Justice Department on Wednesday, Barr made clear his support for changing the law. He said that “the Department of Justice is concerned about the expansive reach of Section 230.” He complained that Section 230 blunts the impact of civil tort lawsuits that should have greater bite in complementing the Justice Department’s criminal enforcement efforts.

In particular, he said, “the Anti-Terrorism Act provides civil redress for victims of terrorist attacks on top of the criminal terrorism laws, yet judicial construction of Section 230 has severely diminished the reach of this civil tool.”

Second, he said that “broad Section 230 civil immunity” can actually be used against the federal government. That was something, he said, that was not intended by the framers of Section 230.

Third, Barr said that Section 230 makes it harder to police “lawless spaces” online. “We are concerned that internet services, under the guise of Section 230, can not only block access to law enforcement — even when officials have secured a court-authorized warrant — but also prevent victims from civil recovery.”

“The concerns regarding Section 230 are many and not all the same,” Barr concluded. And yet he added: “We must also recognize the benefits that Section 230 and technology have brought to our society, and ensure that the proposed cure is not worse than the disease.”

First panel at the Justice Department workshop addressed free speech in light of Section 230

The first panel focused on the issue of liability for speech that takes place on tech platforms.

Section 230’s use of the word “publisher” makes it clear that this statute refers to defamation law, said Annie McAdams, a lead counsel in lawsuits against Backpage.com and Facebook over human trafficking.

The 26 words of Section 230(c)(1) read: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

But as is often the case with debates about Section 230, McAdams was met with pushback by her fellow panelists.

While legal experts usually cite the Stratton Oakmont v. Prodigy decision in discussing Section 230, Fordham University School of Law Professor Benjamin Zipursky said he saw state tort law as a crucial element to understanding the law.

Zipursky was referring to the entirety of Section 230 subsection (c). Subsection (c)(1) is about “treatment of publisher or speaker.” But subsection (c)(2) concerns the broader issue of civil liability for actions taken by tech companies to restrict indecent material on their platforms.

Indeed, the entire Section 230(c) is subtitled as a “protection for ‘Good Samaritan’ blocking and screening of offensive material.”

According to Zipursky, after Prodigy was sued for attempting to filter information to some degree, companies wanted to avoid filtering at all costs so that they wouldn’t be considered a “publisher.”

Zipursky compared companies’ wariness about filtering content with the legal obligations of those who provide emergency medical care.

Before state “Good Samaritan” laws were passed, people who performed CPR or other emergency medical care could be held liable for any resulting damages or injuries. Now, good Samaritans are protected for their efforts to help, just as “interactive computer services” are under Section 230.

Zipursky agreed that Section 230 contains defamation-related language, but said McAdams’ interpretation isn’t a “realistic way” to view the statute.

WilmerHale Partner Patrick Carome said that Section 230’s liability protections were intended to be vast, and not just limited to defamation. That is because of the enormous amount of content these platforms have to manage, he said.

Carome defended Section 230, arguing that it fostered an environment where small companies can also succeed. Allowing companies to self-moderate makes room for future companies to continue developing, he said, and Section 230 does what good laws do: It “puts the focus on the actual wrongdoers.”

He also took vigorous exception to Attorney General Barr’s statement about Section 230 cutting into the ability of victims of terrorism to get compensation from platforms. “It’s just flat out wrong,” said Carome. “Those cases are for the most part being decided not on Section 230 grounds.”

Was Section 230 designed to limit liability for more than just publishing?

United States Naval Academy Professor Jeff Kosseff agreed that Section 230 cast a broader net. The authors of Section 230 did not intend for the law to be narrow and solely about defamation, he said. While he anticipated that political forces would eventually lead to changes in Section 230, he cautioned that those changes need to be carefully constructed so as not to stifle competition.

But Carrie Goldberg, a victims’ rights attorney who specializes in revenge porn cases, agreed with McAdams. One of Goldberg’s clients attempted to sue Grindr after an abusive ex impersonated him and sent his geolocation to several people through the app.

Section 230 is being used as an excuse not to intervene, she said, putting users in danger and denying them “access to justice.”

In response to Carome’s remark that Goldberg’s client should have been aided by the criminal justice system, Goldberg said Section 230 needs to be reformed because “it’s gone too far.”

Carome pushed back, reminding panelists that Section 230 does not only pertain to big tech. It protects the thousands of sites that would not survive “10,000 bites of litigation,” said Carome.

Zipursky advocated for “crafting a middle path” compared to the current law, instead of moving ahead with a “kneejerk reaction.”

Section 230’s impact upon criminal conduct

The second panel of the day focused on whether Section 230’s liability protections had facilitated criminal activity.

University of Miami Professor Mary Anne Franks noted the irony of Section 230’s Good Samaritan clause. It does not model helpful behavior, but rather amplifies harm and profits from it, said Franks. Indeed, she said that subsection (c)(1) actually disincentivizes the tech platforms like Google and Facebook from acting as “good Samaritans.”

But Kate Klonick, a professor at St. John’s University, disagreed. Large tech players like Facebook have economic incentives to avoid bad press, and they know that advertisers do not want ads near or associated with harmful content, she said.

Moreover, technology develops so rapidly that it is difficult to foresee the consequences of rushed changes to the current law, said Klonick.

Computer and Communications Industry Association President Matt Schruers said that many of the larger companies have already taken the initiative in following Section 230 and reporting harmful activity.

Schruers said that Section 230 does generate positive incentives because it allows companies to build platforms without fear of litigation. Removing subsection (c)(1) would result in a “heckler’s veto”: Important content on tech platforms would be deleted out of fear of liability.

When asked about the future possibility of artificial intelligence regulating bad content, Franks said the tech industry is always promising to fix tech issues with more tech. She called this an “illusion.”

As an example of big tech facilitating crime, Franks brought up Facebook Live. When Facebook Live was created, people were livestreaming crimes like rape and murder, said Franks. “This is the world that Section 230 built.”

Mark Zuckerberg did not kill anyone, countered Klonick, to applause from the audience. She said Franks was taking issue with humanity, and that Facebook was just the tool used to exhibit these actions.

Section 230 Interpretation Debate Heats Up Ahead of Landmark Supreme Court Case

Panelists disagreed over the merits of Section 230’s protections and the extent to which they apply.

Screenshot of speakers at the Federalist Society webinar

WASHINGTON, January 25, 2023 — With less than a month to go before the Supreme Court hears a case that could dramatically alter internet platform liability protections, speakers at a Federalist Society webinar on Tuesday were sharply divided over the merits and proper interpretation of Section 230 of the Communications Decency Act.

Gonzalez v. Google, which will go before the Supreme Court on Feb. 21, asks if Section 230 protects Google from liability for hosting terrorist content — and promoting that content via algorithmic recommendations.

If the Supreme Court agrees that “Section 230 does not protect targeted algorithmic recommendations, I don’t see a lot of the current social media platforms and the way they operate surviving,” said Ashkhen Kazaryan, a senior fellow at Stand Together.

Joel Thayer, president of the Digital Progress Institute, argued that the bare text of Section 230(c)(1) does not include any mention of the “immunities” often attributed to the statute, echoing an argument made by several Republican members of Congress.

“All the statute says is that we cannot treat interactive computer service providers or users — in this case, Google’s YouTube — as the publisher or speaker of a third-party post, such as a YouTube video,” Thayer said. “That is all. Warped interpretations from courts… have drastically moved away from the text of the statute to find Section 230(c)(1) as providing broad immunity to civil actions.”

Kazaryan disagreed with this claim, noting that the original co-authors of Section 230 — Sen. Ron Wyden, D-OR, and former Rep. Chris Cox, R-CA — have repeatedly said that Section 230 does provide immunity from civil liability under specific circumstances.

Wyden and Cox reiterated this point in a brief filed Thursday in support of Google, explaining that whether a platform is entitled to immunity under Section 230 relies on two prerequisite conditions. First, the platform must not be “responsible, in whole or in part, for the creation or development of” the content in question, as laid out in Section 230(f)(3). Second, the case must be seeking to treat the platform “as the publisher or speaker” of that content, per Section 230(c)(1).

The statute’s co-authors argued that Google satisfied these conditions and was therefore entitled to immunity, even if its recommendation algorithms made it easier for users to find and consume terrorist content. “Section 230 protects targeted recommendations to the same extent that it protects other forms of content presentation,” they wrote.

Despite the support of Wyden and Cox, Randolph May, president of the Free State Foundation, predicted that the case was “not going to be a clean victory for Google.” And in addition to the upcoming Supreme Court cases, both Congress and President Joe Biden could potentially attempt to reform or repeal Section 230 in the near future, May added.

May advocated for substantial reforms to Section 230 that would narrow online platforms’ immunity. He also proposed that a new rule should rely on a “reasonable duty of care” that would both preserve the interests of online platforms and recognize the harms that fall under their control.

To establish a good replacement for Section 230, policymakers must determine whether there is “a difference between exercising editorial control over content on the one hand, and engaging in conduct relating to the distribution of content on the other hand… and if so, how you would treat those differently in terms of establishing liability,” May said.

No matter the Supreme Court’s decision in Gonzalez v. Google, the discussion is already “shifting the Overton window on how we think about social media platforms,” Kazaryan said. “And we already see proposed regulation legislation on state and federal levels that addresses algorithms in many different ways and forms.”

Texas and Florida have already passed laws that would significantly limit social media platforms’ ability to moderate content, although both have been temporarily blocked pending litigation. Tech companies have asked the Supreme Court to take up the cases, arguing that the laws violate their First Amendment rights by forcing them to host certain speech.

Supreme Court Seeks Biden Administration’s Input on Texas and Florida Social Media Laws

The court has not yet agreed to hear the cases, but multiple justices have commented on their importance.

Photo of Solicitor General Elizabeth Prelogar courtesy of the U.S. Department of Justice

WASHINGTON, January 24, 2023 — The Supreme Court on Monday asked for the Joe Biden administration’s input on a pair of state laws that would prevent social media platforms from moderating content based on viewpoint.

The Republican-backed laws in Texas and Florida both stem from allegations that tech companies are censoring conservative speech. The Texas law would restrict platforms with at least 50 million users from removing or demonetizing content based on “viewpoint.” The Florida law places significant restrictions on platforms’ ability to remove any content posted by members of certain groups, including politicians.

Two trade groups — NetChoice and the Computer & Communications Industry Association — jointly challenged both laws, meeting with mixed results in the appeals courts. They, alongside many tech companies, argue that the laws would violate platforms’ First Amendment right to decide what speech to host.

Tech companies also warn that the laws would force them to disseminate objectionable and even dangerous content. In an emergency application to block the Texas law from going into effect in May, the trade groups wrote that such content could include “Russia’s propaganda claiming that its invasion of Ukraine is justified, ISIS propaganda claiming that extremism is warranted, neo-Nazi or KKK screeds denying or supporting the Holocaust, and encouraging children to engage in risky or unhealthy behavior like eating disorders.”

The Supreme Court has not yet agreed to hear the cases, but multiple justices have commented on the importance of the issue.

In response to the emergency application in May, Justice Samuel Alito wrote that the case involved “issues of great importance that will plainly merit this Court’s review.” However, he disagreed with the court’s decision to block the law pending review, writing that “whether applicants are likely to succeed under existing law is quite unclear.”

Monday’s request asking Solicitor General Elizabeth Prelogar to weigh in on the cases allows the court to put off the decision for another few months.

“It is crucial that the Supreme Court ultimately resolve this matter: it would be a dangerous precedent to let government insert itself into the decisions private companies make on what material to publish or disseminate online,” CCIA President Matt Schruers said in a statement. “The First Amendment protects both the right to speak and the right not to be compelled to speak, and we should not underestimate the consequences of giving government control over online speech in a democracy.”

The Supreme Court is still scheduled to hear two other major content moderation cases next month, which will decide whether Google and Twitter can be held liable for terrorist content hosted on their respective platforms.

Google Defends Section 230 in Supreme Court Terror Case

‘Section 230 is critical to enabling the digital sector’s efforts to respond to extremist[s],’ said a tech industry supporter.

Photo of ISIS supporter by HatabKhurasani from Wikipedia

WASHINGTON, January 13, 2023 – The Supreme Court could trigger a cascade of internet-altering effects that would encourage both the proliferation of offensive speech and the suppression of legitimate speech, and create a “litigation minefield,” if it decides Google is liable for the results of terrorist attacks by entities publishing on its YouTube platform, the search engine company argued Thursday.

The high court will hear the case of an American family whose daughter was killed in an ISIS terrorist attack in Paris in 2015. The family, led by her father Reynaldo Gonzalez, sued Google under the Anti-Terrorism Act for the death, alleging YouTube participated as a publisher of ISIS recruitment videos when it hosted them and its algorithm shared them on the video platform.

But in a brief to the court on Thursday, Google said it is not liable under Section 230 of the Communications Decency Act for content published by third parties on its website, and that deciding otherwise would effectively gut the platform protection provision and “upend the internet.”

Denying the provision’s protections for platforms “could have devastating spillover effects,” Google argued in the brief. “Websites like Google and Etsy depend on algorithms to sift through mountains of user-created content and display content likely relevant to each user. If plaintiffs could evade Section 230(c)(1) by targeting how websites sort content or trying to hold users liable for liking or sharing articles, the internet would devolve into a disorganized mess and a litigation minefield.”

It would also “perversely encourage both wide-ranging suppression of speech and the proliferation of more offensive speech,” it added in the brief. “Sites with the resources to take down objectionable content could become beholden to heckler’s vetoes, removing anything anyone found objectionable.

“Other sites, by contrast, could take the see-no-evil approach, disabling all filtering to avoid any inference of constructive knowledge of third-party content,” Google added. “Still other sites could vanish altogether.”

Google rejected the argument that recommendations by its algorithms convey an “implicit message,” arguing that in such a world, “any organized display [as algorithms do] of content ‘implicitly’ recommends that content and could be actionable.”

The Supreme Court is simultaneously hearing a similar case, Twitter v. Taamneh.

Scrutiny of Section 230 has loomed large since former President Donald Trump was banned from social media platforms for allegedly inciting the Capitol Hill riots in January 2021. Trump and conservatives called for rules limiting that protection in light of the suspensions and bans, while Democrats have not shied away from introducing legislation that would limit the provision if certain content continued to flourish on those platforms.

Supreme Court Justice Clarence Thomas early last year issued a statement calling for a reexamination of tech platform immunity protections following a Texas Supreme Court decision that said Facebook was shielded from liability in a trafficking case.

Meanwhile, startups and internet associations have argued for the preservation of the provision.

“These cases underscore how important it is that digital services have the resources and the legal certainty to deal with dangerous content online,” Matt Schruers, president of the Computer and Communications Industry Association, said in a statement when the Supreme Court decided in October to hear the Gonzalez case.

“Section 230 is critical to enabling the digital sector’s efforts to respond to extremist and violent rhetoric online,” he added, “and these cases illustrate why it is essential that those efforts continue.”
