
Senators Discuss Section 230 Shortcomings and Potential Reforms


Screenshot of Sen. John Thune from the webcast

July 28, 2020 — Senators on Tuesday remained broadly divided over how far changes to Section 230 should go and what direction they should take.

The tenor of the discussion at a Senate Commerce Communications Subcommittee hearing suggested that the law was overdue for an overhaul, as senator after senator criticized what the internet had become.

But proposals for concrete change were fewer. Subcommittee Chairman John Thune, R-S.D., and Ranking Member Brian Schatz, D-Hawaii, for example, introduced the Platform Accountability and Consumer Transparency Act, which calls for procedural transparency.

Some on the right, including Sen. Ted Cruz, R-Texas, and full committee Chairman Roger Wicker, R-Miss., offered both broad and narrow critiques of Section 230. On the left, Sen. Richard Blumenthal, D-Conn., said the PACT Act didn’t go far enough.

And still others, including Sens. Amy Klobuchar, D-Minn., and Jacky Rosen, D-Nev., weighed in on concerns about the intersection of artificial intelligence and the law.

Screenshot of Sen. Amy Klobuchar participating in the hearing remotely

A voice of caution against changes to Section 230

Witnesses warned against making hasty changes to the statute, with former Rep. Christopher Cox, a co-author of Section 230, pointing out the foundational role it had played in the development of the digital world since its inclusion as part of the 1996 Telecom Act.

“It’s important to remember just how much human activity is encompassed within this vast category we so casually refer to as the internet,” Cox said. “To the extent that any new legislation imposes too much compliance burden or too much liability exposure that’s connected to a website’s hosting of user created content, the risk is that too many websites will be forced to respond by getting rid of user generated content altogether.”

Also sounding a voice of caution was Jeff Kosseff, assistant professor of cyber science at the U.S. Naval Academy, who said that it was important to gather more facts before adjusting the law.

Screenshot of Jeff Kosseff, assistant professor at the U.S. Naval Academy, participating in the hearing remotely

“I don’t think we’re at the point of being able to reform, because we have so many competing viewpoints about what platforms should be doing on top of what we could require them to do because of the First Amendment, and other requirements,” he said.

Cox agreed, adding that another immediate challenge was to figure out what was actually doable. Reforming Section 230 seemed like a more daunting task than initially writing it had been, he said.

PACT Act aims to increase platform accountability

The varied approaches that tech platforms take to objectionable content have “led to a limited ability for consumers to address and correct harms that occur online,” Thune said. “And as Americans conduct more and more of their activities online, the net outcome is an increasingly less protected and more vulnerable consumer.”

Thune and Schatz introduced the PACT Act in June. Thune said the bill would increase transparency without damaging the economic, innovative and entrepreneurial benefits stimulated by Section 230.

Screenshot of Sen. Brian Schatz participating in the hearing remotely

It would require platforms to post their content moderation procedures, submit quarterly reports to the Federal Trade Commission explaining content moderation decisions, establish a prompt complaint-and-response system, and implement a toll-free customer service line.

“Section 230 proponents say that Congress can’t possibly change this law without disrupting all of the great innovation that it has enabled, and I just disagree with that,” Schatz said. “The legislative process is about making sure that our laws are in the public interest.”

Blumenthal agreed with Thune and Schatz about the importance of increasing platform accountability.

“If there’s a message to the industry here, it is [that] the need for reform is now,” he said. “There’s a broad consensus that Section 230, as it presently exists, no longer affords sufficient protection to the public, to consumers, to victims and survivors of abuse.”

However, Blumenthal warned that the PACT Act did not go far enough. He emphasized the traumatic and lengthy process that individuals currently must go through to get abusive imagery such as child pornography removed from online platforms, which involves obtaining a court order and locating every instance of the content.

Screenshot of Sen. Richard Blumenthal from the webcast

“I’m very concerned about the burden that’s placed on the victims and survivors,” he said. “The PACT Act does not provide any incentive for Facebook to police its own platform.”

Hate speech and algorithmic discrimination

“Most powerful online intermediaries today are anything but publishers and distributors of user generated content,” said Fordham Law Professor Olivier Sylvain. “They harvest, sort and repurpose user posts and personal data to attract and hold consumer attention, and more importantly, to market these valuable data to advertisers…The result is too often lived harm.”

Sylvain pointed to Facebook’s practice of collecting data on users to categorize them across hundreds of dimensions using automated processes.

“Under civil rights law, Congress forbids discrimination in ads on the basis of race, ethnicity, age and gender in the markets for housing, education and consumer credit,” he said. “But that is exactly what Facebook allowed building managers and employers to do.”

Screenshot of Olivier Sylvain, professor at Fordham University, participating in the hearing remotely

Klobuchar took a similar angle, highlighting certain ads targeted at African American-focused webpages during the 2016 election that told viewers they could vote by texting a falsified number rather than waiting at the polls.

“One of the issues commonly raised regarding content moderation across multiple platforms is the presence of bias in artificial intelligence systems that are used to analyze the content,” Rosen said. “Decisions made through AI systems, including for content moderation, run the risk of further marginalizing and censoring groups that already face disproportionate prejudice and discrimination, both online and offline.”

In addition, content moderation often misses dangerous hate speech, Rosen continued, pointing out the antisemitic posts found to have been made by the Tree of Life synagogue shooter on a right-wing media platform prior to his deadly attack.

“There’s so much work to be done in this area, because despite the best efforts of even the most well-motivated social media platforms, we see examples where the algorithms don’t work…I think the most troubling challenge for writing law in this area is, what about the great middle ground, where the platforms are not bad actors, they’re trying to do the right thing, but it just doesn’t amount to enough?” Cox said.

Complexities of content moderation practices

“Is there an approach by which we can incentivize active, clear and consistent content moderation without the negative consequences of less open platforms and fewer new entrants into the internet ecosystem?” Sen. Tammy Baldwin, D-Wis., asked.

“I think you really hit the nail on the head in terms of what the challenge is here,” Kosseff said.

Rather than an overly prescriptive approach, Kosseff recommended moving toward transparency, adding that some platforms have already begun to take steps in that direction.

Witnesses emphasized the difficulty of large-scale content moderation for social media platforms.

 “The scale of these efforts is staggering,” said Elizabeth Banker, deputy general counsel of the Internet Association. “Facebook took action against 1.9 billion pieces of spam in a three-month period. In multiple cases, Section 230 has shielded providers from lawsuits from spammers who sued over removing their spam material.”

However, some senators were less willing to extend tech platforms the benefit of the doubt.

“The reality is that platforms have a strong incentive to exercise control over the content each of us sees, because if they can present us with content that will keep us engaged on the platform, we will stay on the platform longer,” Thune said.

Screenshot of Sen. Ted Cruz from the webcast

Cruz repeated his oft-made claims of anti-conservative bias and censorship on social media platforms.

“Given the monopoly power they have over free speech, I view that as the single greatest threat to our democratic process we have today,” he said.

‘Otherwise objectionable’ is not overly vague, according to a co-author of Section 230

The hearing also featured discussion of the petition the Commerce Department filed Monday, at the direction of an executive order from President Donald Trump, asking the Federal Communications Commission to issue proposed rules narrowing Section 230’s protections.

Cox pointed out that the original iteration of the bill that evolved into Section 230 contained a provision explicitly denying the FCC authority to regulate the content of speech.

“I would like to see the FTC be more active in this area — I’d like to see the FTC holding platforms to their promises,” Cox added.

Screenshot of former Rep. Christopher Cox participating in the hearing remotely

One of the potential ambiguities raised by the petition was the phrase “otherwise objectionable.”

“I question whether this term is too broad and improperly shields online platforms from liability when they remove content that they simply disagree with, dislike or find distasteful,” Wicker said. “The term may require further defining to reduce ambiguity, increase accountability and prevent misapplication of the law.”

Cox explained that ‘otherwise objectionable’ should be understood with reference to the list of specific offenses preceding it, adding that it was “not an open-ended grant of immunity for editing content for any unrelated reason a website can think of.”


Democrats Use Whistleblower Testimony to Launch New Effort at Changing Section 230

The Justice Against Malicious Algorithms Act seeks to target large online platforms that push harmful content.


Rep. Anna Eshoo, D-California

WASHINGTON, October 14, 2021 – House Democrats are preparing to introduce legislation Friday that would remove legal immunities for companies that knowingly allow content that is physically or emotionally damaging to their users, following testimony last week from a Facebook whistleblower who claimed the company is able to push harmful content because of such legal protections.

The Justice Against Malicious Algorithms Act would amend Section 230 of the Communications Decency Act – which provides legal liability protections to companies for the content their users post on their platform – to remove that shield when the platform “knowingly or recklessly uses an algorithm or other technology to recommend content that materially contributes to physical or severe emotional injury,” according to a Thursday press release, which noted that the legislation will not apply to small online platforms with fewer than five million unique monthly visitors or users.

The legislation is relatively narrow in its target: algorithms that rely on a user’s personal history to recommend content. It won’t apply to search features or to algorithms that do not rely on that personalization, nor to web hosting or data storage and transfer.

Reps. Anna Eshoo, D-California, Frank Pallone Jr., D-New Jersey, Mike Doyle, D-Pennsylvania, and Jan Schakowsky, D-Illinois, plan to introduce the legislation a little over a week after Facebook whistleblower Frances Haugen alleged that the company misrepresents how much offending content it terminates.

Citing Haugen’s testimony before the Senate on October 5, Eshoo said in the release that “Facebook is knowingly amplifying harmful content and abusing the immunity of Section 230 well beyond congressional intent.

“The Justice Against Malicious Algorithms Act ensures courts can hold platforms accountable when they knowingly or recklessly recommend content that materially contributes to harm. This approach builds on my bill, the Protecting Americans from Dangerous Algorithms Act, and I’m proud to partner with my colleagues on this important legislation.”

The Protecting Americans from Dangerous Algorithms Act was introduced with Rep. Tom Malinowski, D-New Jersey, last October to hold companies responsible for “algorithmic amplification of harmful, radicalizing content that leads to offline violence.”

From Haugen testimony to legislation

Haugen claimed in her Senate testimony that according to internal research estimates, Facebook acts against just three to five percent of hate speech and 0.6 percent of violence incitement.

“The reality is that we’ve seen from repeated documents in my disclosures is that Facebook’s AI systems only catch a very tiny minority of offending content and, best-case scenario, in the case of something like hate speech, at most they will ever get 10 to 20 percent,” Haugen testified.

Haugen was catapulted into the national spotlight after she revealed on the television program 60 Minutes that she was the person who leaked documents to the Wall Street Journal and the Securities and Exchange Commission. The documents reportedly showed that Facebook knew about the mental health harm its photo-sharing app Instagram causes teens but allegedly ignored the findings because addressing them would have conflicted with its profit motive.

Earlier this year, Facebook CEO Mark Zuckerberg said the company was developing an Instagram version for kids under 13. But following the Journal story and calls by lawmakers to back down from pursuing the app, Facebook suspended its development and said it was making changes to its apps to “nudge” users away from content that may be harmful to them.

Haugen’s testimony versus Zuckerberg’s Section 230 vision

In his testimony before the House Energy and Commerce committee in March, Zuckerberg claimed that the company’s hate speech removal policy “has long been the broadest and most aggressive in the industry.”

This claim has been the basis for the CEO’s suggestion that Section 230 be amended to punish companies that fail to create systems for removing violent and hateful content that are proportional in size and effectiveness to the platform’s size. In other words, larger sites would face more regulation and smaller sites would face less.

Or in Zuckerberg’s words to Congress, “platforms’ intermediary liability protection for certain types of unlawful content [should be made] conditional on companies’ ability to meet best practices to combat the spread of harmful content.”

Facebook has previously pushed for FOSTA-SESTA, a controversial 2018 law that created an exception to Section 230 for advertisements related to prostitution. Lawmakers have proposed other modifications to the liability provision, including removing protections for content that the platform is paid for and for allowing the spread of vaccine misinformation.

Zuckerberg said companies shouldn’t be held responsible for individual pieces of content that evade the systems in place, so long as the company has demonstrated that it maintains “adequate systems to address unlawful content.” That, he said, is predicated on transparency.

But according to Haugen, “Facebook’s closed design means it has no oversight — even from its own Oversight Board, which is as blind as the public. Only Facebook knows how it personalizes your feed for you. It hides behind walls that keep the eyes of researchers and regulators from understanding the true dynamics of the system.” She also alleges that Facebook’s leadership hides “vital information” from the public and global governments.

An Electronic Frontier Foundation study found that Facebook lags behind competitors on issues of transparency.

Where the parties agree

Zuckerberg and Haugen do agree that Section 230 should be amended. Haugen would amend Section 230 “to make Facebook responsible for the consequences of their intentional ranking decisions,” meaning that practices such as engagement-based ranking would be evaluated for the incendiary or violent content they promote above more mundane content. If Facebook is choosing to promote content which damages mental health or incites violence, Haugen’s vision of Section 230 would hold the company accountable. This change would not hold Facebook responsible for user-generated content, only for the promotion of harmful content.

Both have also called for a third-party body to be created by the legislature which provides oversight on platforms like Facebook.

Haugen asks that this body be able to conduct independent audits of Facebook’s data, algorithms, and research, and that the information be made available to the public, scholars and researchers to interpret, with adequate privacy protection and anonymization in place. Besides taking into account the size and scope of the platforms it regulates, Zuckerberg asks that the practices of the body be “fair and clear” and that unrelated issues “like encryption or privacy changes” be dealt with separately.

With reporting from Riley Steward



Repealing Section 230 Would Be Harmful to the Internet As We Know It, Experts Agree

While some advocate for a tightening of language, other experts believe Section 230 should not be touched.


Rep. Ken Buck, R-Colo., speaking on the floor of the House

WASHINGTON, September 17, 2021—Rep. Ken Buck, R-Colo., advocated for legislators to “tighten up” the language of Section 230 while preserving the “spirit of the internet” and enhancing competition.

There is common ground in supporting efforts to minimize speech advocating for imminent harm, said Buck, even though he noted that Republican and Democratic critics tend to approach the issue of changing Section 230 from vastly different directions.

“Nobody wants a terrorist organization recruiting on the internet or an organization that is calling for violent actions to have access to Facebook,” Buck said. He followed up that statement, however, by stating that the most effective way to combat “bad speech is with good speech” and not by censoring “what one person considers bad speech.”

Antitrust not necessarily the best means to improve competition policy

For companies that are not technically in violation of antitrust policies, improving competition through other means would have to be the answer, said Buck. He pointed to Parler as a social media platform that is an appropriate alternative to Twitter.

Though some Twitter users did flock to Parler, particularly during and around the 2020 election, the newer social media company has a reputation for allowing objectionable content that would otherwise be unable to thrive on social media.

Buck also set himself apart from some of his fellow Republicans—including Donald Trump—by clarifying that he does not want to repeal Section 230.

“I think that repealing Section 230 is a mistake,” he said. “If you repeal Section 230, there will be a slew of lawsuits.” Buck explained that without the protections afforded by Section 230, big companies will likely find a way to sufficiently address these lawsuits, and the only entities that will be harmed will be the alternative platforms that were meant to serve as competition.

More content moderation needed

Daphne Keller of the Stanford Cyber Policy Center argued that it is in the best interest of social media platforms to enact various forms of content moderation, and address speech that may be legal but objectionable.

“If platforms just hosted everything that users wanted to say online, or even everything that’s legal to say—everything that the First Amendment permits—you would get this sort of cesspool or mosh pit of online speech that most people don’t actually want to see,” she said. “Users would run away and advertisers would run away and we wouldn’t have functioning platforms for civic discourse.”

Even companies like Parler and Gab—which pride themselves on being unyielding bastions of free speech—have begun to engage in content moderation.

“There’s not really a left-right divide on whether that’s a good idea, because nobody actually wants nothing but porn and bullying and pro-anorexia content and other dangerous or garbage content all the time on the internet.”

She explained that this is a double-edged sword, because while consumers seem to value some level of moderation, companies moderating their platforms have a huge amount of influence over what their consumers see and say.

What problems do critics of Section 230 want addressed?

Internet Association President and CEO Dane Snowden stated that most of the problems surrounding the Section 230 discussion boil down to a fundamental disagreement over the problems that legislators are trying to solve.

Changing the language of Section 230 would impact not just the tech industry: “[Section 230] impacts ISPs, libraries, and universities,” he said. “Things like self-publishing, crowdsourcing, Wikipedia, how-to videos—all those things are impacted by any kind of significant neutering of Section 230.”

Section 230 was created to give users the ability and security to create content online without fear of legal reprisals, he said.

Another significant supporter of the status quo was Chamber of Progress CEO Adam Kovacevich.

“I don’t think Section 230 needs to be fixed. I think it needs [a better] publicist.” Kovacevich stated that policymakers need to gain a better appreciation for Section 230. “If you took away 230, you’d give companies two bad options: either turn into Disneyland or turn into a wasteland.”

“Either turn into a very highly curated experience where only certain people have the ability to post content, or turn into a wasteland where essentially anything goes because a company fears legal liability,” Kovacevich said.



Judge Rules Exemption Exists in Section 230 for Twitter FOSTA Case

Latest lawsuit illustrates the increasing fragility of Section 230 legal protections.


Twitter CEO Jack Dorsey.

August 24, 2021—A California court has allowed a lawsuit against Twitter to proceed, brought by two victims of sex trafficking who allege the social media company initially refused to remove content that exploited the underage plaintiffs and that then went viral.

The anonymous plaintiffs allege that they were manipulated into making pornographic videos of themselves through another social media app, Snapchat, after which the videos were posted on Twitter. When the plaintiffs asked Twitter to take down the posts, it refused, and it was only after the Department of Homeland Security got involved that the social media company complied.

At issue in the case is whether Twitter had any obligation to remove the content immediately under Section 230 of the Communications Decency Act, which provides platforms legal liability protections for the content their users post.

Court’s finding

The court ruled Thursday that the case should proceed after finding that Twitter knew such content was on the site, had to have known it was sex trafficking, and refused to act on it immediately.

“The Court finds that these allegations are sufficient to allege an ongoing pattern of conduct amounting to a tacit agreement with the perpetrators in this case to allow them to post videos and photographs it knew or should have known were related to sex trafficking without blocking their accounts or the Videos,” the decision read.

“In sum, the Court finds that Plaintiffs have stated a claim for civil liability under the [Trafficking Victims Protection Reauthorization Act] on the basis of beneficiary liability and that the claim falls within the exemption to Section 230 immunity created by FOSTA.”

The Stop Enabling Sex Traffickers Act and the Allow States and Victims to Fight Online Sex Trafficking Act, passed together in 2018 as the package law SESTA-FOSTA, amended Section 230 to exclude the enforcement of federal or state sex trafficking laws from its intermediary protections.

The court dismissed the plaintiffs’ other claims against the company but found that the trafficking claim met the relatively low bar needed to move the case forward.

The arguments

The plaintiffs allege that Twitter violated the TVPRA because it allegedly knew about the videos, benefitted from them and did nothing to address the problem before it went viral.

Twitter argued that FOSTA, as applied to the CDA, only narrowly applies to websites that are “knowingly assisting and profiting from reprehensible crimes;” the plaintiffs allegedly fail to show that the company “affirmatively participated” in such crimes; and the company cannot be held liable “simply because it did not take the videos down immediately.”

Experts asserted companies may hesitate to bring a Section 230 defense in court

The case is yet another instance of U.S. courts increasingly poking holes in arguments brought by technology companies that suggest they cannot be held liable for content on their platforms under Section 230, which is currently the subject of heated debate in Washington over whether to reform it or abolish it entirely.

State judges in Texas and California, for example, have ruled against Amazon and its Section 230 defense in a number of case-specific instances. Experts on a panel in May said that if courts keep ruling against the defense, a deluge of lawsuits against companies may follow.

And last month, citing some of these cases, lawyers argued that big tech companies may begin to shy away from bringing the Section 230 defense to court for fear of alerting lawmakers to changing legal views on the provision, which could ignite its reform.

