

Senators Discuss Section 230 Shortcomings and Potential Reforms


Screenshot of Sen. John Thune from the webcast

July 28, 2020 — Senators on Tuesday remained broadly divided on the extent and direction that changes to Section 230 should take.

The tenor of the discussion at a Senate Commerce Communications Subcommittee hearing suggested that the law was overdue for an overhaul, as senator after senator criticized what the internet had become.

But proposals for concrete change were fewer. Subcommittee Chairman John Thune, R-S.D., and Ranking Member Brian Schatz, D-Hawaii, for example, introduced the Platform Accountability and Consumer Transparency Act, which calls for procedural transparency.

Some on the right, including Sen. Ted Cruz, R-Texas, and full committee Chairman Roger Wicker, R-Miss., offered both broad and narrow critiques of Section 230. On the left, Sen. Richard Blumenthal, D-Conn., said the PACT Act didn’t go far enough.

And still others, including Sens. Amy Klobuchar, D-Minn., and Jacky Rosen, D-Nev., weighed in on concerns about the intersection of artificial intelligence and the law.

Screenshot of Sen. Amy Klobuchar participating in the hearing remotely

A voice of caution against changes to Section 230

Witnesses warned against making hasty changes to the statute, with former Rep. Christopher Cox, a co-author of Section 230, pointing out the foundational role it had played in the development of the digital world since its inclusion as part of the 1996 Telecom Act.

“It’s important to remember just how much human activity is encompassed within this vast category we so casually refer to as the internet,” Cox said. “To the extent that any new legislation imposes too much compliance burden or too much liability exposure that’s connected to a website’s hosting of user created content, the risk is that too many websites will be forced to respond by getting rid of user generated content altogether.”

Also sounding a voice of caution was Jeff Kosseff, assistant professor of cyber science at the U.S. Naval Academy, who said that it was important to gather more facts before adjusting the law.

Screenshot of Jeff Kosseff, assistant professor at the U.S. Naval Academy, participating in the hearing remotely

“I don’t think we’re at the point of being able to reform, because we have so many competing viewpoints about what platforms should be doing on top of what we could require them to do because of the First Amendment, and other requirements,” he said.

Cox agreed, adding that another immediate challenge was to figure out what was actually doable. Reforming Section 230 seemed like a more daunting task than initially writing it had been, he said.

PACT Act would aim to increase platform accountability

The varied approaches that tech platforms take to objectionable content have “led to a limited ability for consumers to address and correct harms that occur online,” Thune said. “And as Americans conduct more and more of their activities online, the net outcome is an increasingly less protected and more vulnerable consumer.”

Thune and Schatz introduced the PACT Act in June. Thune said the bill would increase transparency without damaging the economic, innovative and entrepreneurial benefits stimulated by Section 230.

Screenshot of Sen. Brian Schatz participating in the hearing remotely

It would require platforms to post their content moderation procedures, submit quarterly reports to the Federal Trade Commission explaining content moderation decisions, establish a prompt complaint and response system and implement a toll-free customer service line.

“Section 230 proponents say that Congress can’t possibly change this law without disrupting all of the great innovation that it has enabled, and I just disagree with that,” Schatz said. “The legislative process is about making sure that our laws are in the public interest.”

Blumenthal agreed with Thune and Schatz about the importance of increasing platform accountability.

“If there’s a message to the industry here, it is [that] the need for reform is now,” he said. “There’s a broad consensus that Section 230, as it presently exists, no longer affords sufficient protection to the public, to consumers, to victims and survivors of abuse.”

However, Blumenthal warned that the PACT Act did not go far enough, emphasizing the traumatic and lengthy process individuals currently face to get abusive imagery, such as child pornography, removed from online platforms, which involves obtaining a court order and locating every instance of the content.

Screenshot of Sen. Richard Blumenthal from the webcast

“I’m very concerned about the burden that’s placed on the victims and survivors,” he said. “The PACT Act does not provide any incentive for Facebook to police its own platform.”

Hate speech and algorithmic discrimination

“Most powerful online intermediaries today are anything but publishers and distributors of user generated content,” said Fordham Law Professor Olivier Sylvain. “They harvest, sort and repurpose user posts and personal data to attract and hold consumer attention, and more importantly, to market these valuable data to advertisers…The result is too often lived harm.”

Sylvain pointed to Facebook’s practice of collecting data on users to categorize them across hundreds of dimensions using automated processes.

“Under civil rights law, Congress forbids discrimination in ads on the basis of race, ethnicity, age and gender in the markets for housing, education and consumer credit,” he said. “But that is exactly what Facebook allowed building managers and employers to do.”

Screenshot of Olivier Sylvain, professor at Fordham University, participating in the hearing remotely

Klobuchar took a similar angle, highlighting ads targeted at African American-focused webpages during the 2016 election that told viewers they could vote by texting a falsified number rather than waiting at the polls.

“One of the issues commonly raised regarding content moderation across multiple platforms is the presence of bias in artificial intelligence systems that are used to analyze the content,” Rosen said. “Decisions made through AI systems, including for content moderation, run the risk of further marginalizing and censoring groups that already face disproportionate prejudice and discrimination, both online and offline.”

In addition, content moderation often misses dangerous hate speech, Rosen continued, pointing out the antisemitic posts found to have been made by the Tree of Life synagogue shooter on a right-wing media platform prior to his deadly attack.

“There’s so much work to be done in this area, because despite the best efforts of even the most well-motivated social media platforms, we see examples where the algorithms don’t work…I think the most troubling challenge for writing law in this area is, what about the great middle ground, where the platforms are not bad actors, they’re trying to do the right thing, but it just doesn’t amount to enough?” Cox said.

Complexities of content moderation practices

“Is there an approach by which we can incentivize active, clear and consistent content moderation without the negative consequences of less open platforms and fewer new entrants into the internet ecosystem?” Sen. Tammy Baldwin, D-Wis., asked.

“I think you really hit the nail on the head in terms of what the challenge is here,” Kosseff said.

Rather than an overly prescriptive approach, Kosseff recommended moving toward transparency, adding that some platforms have already begun to take steps in that direction.

Witnesses emphasized the difficulty of large-scale content moderation for social media platforms.

 “The scale of these efforts is staggering,” said Elizabeth Banker, deputy general counsel of the Internet Association. “Facebook took action against 1.9 billion pieces of spam in a three-month period. In multiple cases, Section 230 has shielded providers from lawsuits from spammers who sued over removing their spam material.”

However, some senators were less willing to extend tech platforms the benefit of the doubt.

“The reality is that platforms have a strong incentive to exercise control over the content each of us sees, because if they can present us with content that will keep us engaged on the platform, we will stay on the platform longer,” Thune said.

Screenshot of Sen. Ted Cruz from the webcast

Cruz repeated his oft-made claims of anti-conservative bias and censorship on social media platforms.

“Given the monopoly power they have over free speech, I view that as the single greatest threat to our democratic process we have today,” he said.

‘Otherwise objectionable’ is not overly vague, according to Section 230 co-author

The hearing also featured discussion of the Commerce Department’s petition, filed Monday, asking the Federal Communications Commission to issue proposed rules narrowing Section 230’s protections, as directed by an executive order from President Donald Trump.

Cox pointed out that the original iteration of the bill that evolved into Section 230 contained a provision explicitly denying the FCC authority to regulate the content of speech.

“I would like to see the FTC be more active in this area — I’d like to see the FTC holding platforms to their promises,” Cox added.

Screenshot of former Rep. Christopher Cox participating in the hearing remotely

One of the potential ambiguities raised by the petition was the phrase “otherwise objectionable.”

“I question whether this term is too broad and improperly shields online platforms from liability when they remove content that they simply disagree with, dislike or find distasteful,” Wicker said. “The term may require further defining to reduce ambiguity, increase accountability and prevent misapplication of the law.”

Cox explained that ‘otherwise objectionable’ should be understood with reference to the list of specific offenses preceding it, adding that it was “not an open-ended grant of immunity for editing content for any unrelated reason a website can think of.”


Section 230 Interpretation Debate Heats Up Ahead of Landmark Supreme Court Case

Panelists disagreed over the merits of Section 230’s protections and the extent to which they apply.


Screenshot of speakers at the Federalist Society webinar

WASHINGTON, January 25, 2023 — With less than a month to go before the Supreme Court hears a case that could dramatically alter internet platform liability protections, speakers at a Federalist Society webinar on Tuesday were sharply divided over the merits and proper interpretation of Section 230 of the Communications Decency Act.

Gonzalez v. Google, which will go before the Supreme Court on Feb. 21, asks if Section 230 protects Google from liability for hosting terrorist content — and promoting that content via algorithmic recommendations.

If the Supreme Court agrees that “Section 230 does not protect targeted algorithmic recommendations, I don’t see a lot of the current social media platforms and the way they operate surviving,” said Ashkhen Kazaryan, a senior fellow at Stand Together.

Joel Thayer, president of the Digital Progress Institute, argued that the bare text of Section 230(c)(1) does not include any mention of the “immunities” often attributed to the statute, echoing an argument made by several Republican members of Congress.

“All the statute says is that we cannot treat interactive computer service providers or users — in this case, Google’s YouTube — as the publisher or speaker of a third-party post, such as a YouTube video,” Thayer said. “That is all. Warped interpretations from courts… have drastically moved away from the text of the statute to find Section 230(c)(1) as providing broad immunity to civil actions.”

Kazaryan disagreed with this claim, noting that the original co-authors of Section 230 — Sen. Ron Wyden, D-Ore., and former Rep. Chris Cox, R-Calif. — have repeatedly said that Section 230 does provide immunity from civil liability under specific circumstances.

Wyden and Cox reiterated this point in a brief filed Thursday in support of Google, explaining that whether a platform is entitled to immunity under Section 230 relies on two prerequisite conditions. First, the platform must not be “responsible, in whole or in part, for the creation or development of” the content in question, as laid out in Section 230(f)(3). Second, the case must be seeking to treat the platform “as the publisher or speaker” of that content, per Section 230(c)(1).

The statute’s co-authors argued that Google satisfied these conditions and was therefore entitled to immunity, even if its recommendation algorithms made it easier for users to find and consume terrorist content. “Section 230 protects targeted recommendations to the same extent that it protects other forms of content presentation,” they wrote.

Despite the support of Wyden and Cox, Randolph May, president of the Free State Foundation, predicted that the case was “not going to be a clean victory for Google.” And in addition to the upcoming Supreme Court cases, both Congress and President Joe Biden could potentially attempt to reform or repeal Section 230 in the near future, May added.

May advocated for substantial reforms to Section 230 that would narrow online platforms’ immunity. He also proposed that a new rule rely on a “reasonable duty of care” that would both preserve the interests of online platforms and recognize the harms that fall under their control.

To establish a good replacement for Section 230, policymakers must determine whether there is “a difference between exercising editorial control over content on the one hand, and engaging in conduct relating to the distribution of content on the other hand… and if so, how you would treat those differently in terms of establishing liability,” May said.

No matter the Supreme Court’s decision in Gonzalez v. Google, the discussion is already “shifting the Overton window on how we think about social media platforms,” Kazaryan said. “And we already see proposed regulation legislation on state and federal levels that addresses algorithms in many different ways and forms.”

Texas and Florida have already passed laws that would significantly limit social media platforms’ ability to moderate content, although both have been temporarily blocked pending litigation. Tech companies have asked the Supreme Court to take up the cases, arguing that the laws violate their First Amendment rights by forcing them to host certain speech.



Supreme Court Seeks Biden Administration’s Input on Texas and Florida Social Media Laws

The court has not yet agreed to hear the cases, but multiple justices have commented on their importance.


Photo of Solicitor General Elizabeth Prelogar courtesy of the U.S. Department of Justice

WASHINGTON, January 24, 2023 — The Supreme Court on Monday asked for the Biden administration’s input on a pair of state laws that would prevent social media platforms from moderating content based on viewpoint.

The Republican-backed laws in Texas and Florida both stem from allegations that tech companies are censoring conservative speech. The Texas law would restrict platforms with at least 50 million users from removing or demonetizing content based on “viewpoint.” The Florida law places significant restrictions on platforms’ ability to remove any content posted by members of certain groups, including politicians.

Two trade groups — NetChoice and the Computer & Communications Industry Association — jointly challenged both laws, meeting with mixed results in appeals courts. They, alongside many tech companies, argue that the laws violate platforms’ First Amendment right to decide what speech to host.

Tech companies also warn that the laws would force them to disseminate objectionable and even dangerous content. In an emergency application to block the Texas law from going into effect in May, the trade groups wrote that such content could include “Russia’s propaganda claiming that its invasion of Ukraine is justified, ISIS propaganda claiming that extremism is warranted, neo-Nazi or KKK screeds denying or supporting the Holocaust, and encouraging children to engage in risky or unhealthy behavior like eating disorders.”

The Supreme Court has not yet agreed to hear the cases, but multiple justices have commented on the importance of the issue.

In response to the emergency application in May, Justice Samuel Alito wrote that the case involved “issues of great importance that will plainly merit this Court’s review.” However, he disagreed with the court’s decision to block the law pending review, writing that “whether applicants are likely to succeed under existing law is quite unclear.”

Monday’s request asking Solicitor General Elizabeth Prelogar to weigh in on the cases allows the court to put off the decision for another few months.

“It is crucial that the Supreme Court ultimately resolve this matter: it would be a dangerous precedent to let government insert itself into the decisions private companies make on what material to publish or disseminate online,” CCIA President Matt Schruers said in a statement. “The First Amendment protects both the right to speak and the right not to be compelled to speak, and we should not underestimate the consequences of giving government control over online speech in a democracy.”

The Supreme Court is still scheduled to hear two other major content moderation cases next month, which will decide whether Google and Twitter can be held liable for terrorist content hosted on their respective platforms.



Google Defends Section 230 in Supreme Court Terror Case

‘Section 230 is critical to enabling the digital sector’s efforts to respond to extremist[s],’ said a tech industry supporter.


Photo of ISIS supporter by HatabKhurasani from Wikipedia

WASHINGTON, January 13, 2023 – The Supreme Court could trigger a cascade of internet-altering effects that would encourage both the proliferation of offensive speech and the suppression of speech, and create a “litigation minefield,” if it decides Google is liable for the results of terrorist attacks by entities publishing on its YouTube platform, the search engine company argued Thursday.

The high court will hear the case of an American family whose daughter, Nohemi Gonzalez, was killed in an ISIS terrorist attack in Paris in 2015. The family sued Google under the Anti-Terrorism Act for her death, alleging that YouTube participated as a publisher of ISIS recruitment videos when it hosted them and its algorithm shared them on the video platform.

But in a brief to the court on Thursday, Google said it is not liable for the content published by third parties on its website under Section 230 of the Communications Decency Act, and that deciding otherwise would effectively gut the platform protection provision and “upend the internet.”

Denying the provision’s protections for platforms “could have devastating spillover effects,” Google argued in the brief. “Websites like Google and Etsy depend on algorithms to sift through mountains of user-created content and display content likely relevant to each user. If plaintiffs could evade Section 230(c)(1) by targeting how websites sort content or trying to hold users liable for liking or sharing articles, the internet would devolve into a disorganized mess and a litigation minefield.”

It would also “perversely encourage both wide-ranging suppression of speech and the proliferation of more offensive speech,” it added in the brief. “Sites with the resources to take down objectionable content could become beholden to heckler’s vetoes, removing anything anyone found objectionable.

“Other sites, by contrast, could take the see-no-evil approach, disabling all filtering to avoid any inference of constructive knowledge of third-party content,” Google added. “Still other sites could vanish altogether.”

Google rejected the argument that recommendations by its algorithms convey an “implicit message,” arguing that in such a world, “any organized display [as algorithms do] of content ‘implicitly’ recommends that content and could be actionable.”

The Supreme Court is simultaneously hearing a similar case, Twitter v. Taamneh.

Scrutiny of Section 230 has loomed large since former President Donald Trump was banned from social media platforms for allegedly inciting the Capitol Hill riots in January 2021. Trump and conservatives called for rules limiting that protection in light of the suspensions and bans, while Democrats have not shied away from introducing legislation that would limit the provision if certain content continues to flourish on those platforms.

Supreme Court Justice Clarence Thomas early last year issued a statement calling for a reexamination of tech platform immunity protections following a Texas Supreme Court decision that said Facebook was shielded from liability in a trafficking case.

Meanwhile, startups and internet associations have argued for the preservation of the provision.

“These cases underscore how important it is that digital services have the resources and the legal certainty to deal with dangerous content online,” Matt Schruers, president of the Computer and Communications Industry Association, said in a statement when the Supreme Court decided in October to hear the Gonzalez case.

“Section 230 is critical to enabling the digital sector’s efforts to respond to extremist and violent rhetoric online,” he added, “and these cases illustrate why it is essential that those efforts continue.”

Continue Reading
