June 26, 2020 — The debate over Section 230 of the Communications Decency Act spans a broad range of views, with some arguing that it essentially plays the same role as the First Amendment and others claiming that it wrongfully enables significant abuse.
Speaking at a Yale Law School webinar on Thursday, attorney Cathy Gellis said she was surprised by how quickly public opinion had turned against the statute.
“Section 230 worked — we got an internet, and the internet has allowed, for the first time in history, all seven billion of us on earth to interact with each other,” she said. “The downside is all seven billion people on earth can now interact with each other, and we don’t really know how to do it very well.”
New tools and norms to better deal with various forms of online speech are likely to develop, but the process will take time, Gellis said.
“I think we can get there, as long as we don’t throw out the baby with the bathwater and revert back artificially to a time before we had this technology that would enable these connections,” she added.
Much of the opposition to Section 230 has come from Republicans such as Sen. Josh Hawley, R-Mo., who argue that it allows platforms to unfairly discriminate against conservative ideas.
David French, attorney and senior editor of conservative political magazine The Dispatch, said he could understand this fear, but argued that it was not the government’s role to prevent such alleged discrimination from happening.
“You better be sure that you’re going to be in charge of the government indefinitely, if you’re going to place the government in that much control,” he said. “As a matter of constitutional principle, I reject that appeal to the government, and as a matter of pragmatics, I think it’s shortsighted.”
Any bill repealing or modifying Section 230 should be renamed the “Bring Porn to Facebook Act,” French joked, because of the all-or-nothing moderation choices it would force on platforms.
“A version of Section 230 would lock in through the First Amendment, which is something that I think a lot of critics of Section 230 are completely overlooking,” French said. “But the getting from A to B would be messy — there would be a blizzard of litigation.”
Big tech platforms, which are the primary target of proposed Section 230 modifications, would likely be able to muscle through this litigation, French predicted. But smaller platforms, he warned, might not have the resources to survive it.
“I don’t think that scaling back the immunity for tech companies is going to suddenly flood the world with litigation,” argued Carrie Goldberg, attorney and founding partner at C.A. Goldberg PLLC.
Section 230 has been in place for the entire history of the internet as it exists today, she added, making it difficult to accurately predict what might happen in its absence.
Section 230 is also criticized for protecting platforms that enable abuse
Platforms being held liable for users’ abuses is comparable to schools being held liable for Title IX violations and landlords being held liable for dangerous conditions on their properties, Goldberg suggested.
“It’s not unusual for third parties to be held responsible when they are being deliberately indifferent, or when their head is in the sand and they’re refusing to see or hear things that happen on their watch,” she said.
Goldberg also rejected the idea that giving individuals the ability to sue companies for harms caused or furthered by their platforms went against the First Amendment.
“Demanding that companies share responsibility for the really extreme abuses that happen on platforms is not attacking speech,” she said.
Many of the platforms currently being protected by Section 230 have expanded far beyond the bulletin-board nature of the earliest interactive internet services, pointed out St. John’s University law professor Kate Klonick.
“In 1996, when [Section 230] was drafted, the things that it was really speaking to were message boards and things that were very clearly speech, and now we have all of these apps and all of these types of things that are really performing more services and conduct,” she said.
Section 230 has allowed big tech companies to grow “without any of the pressure that we see in other industries to be creating safe products or to be thinking about how their clients and their users and their customers could be injured if they make bad decisions,” Goldberg said.
As a result, she added, her law firm has been flooded with cases of harm enabled by social media, such as that of a man who used Tinder to arrange a meeting with a young girl whom he then murdered, or rampant child sexual abuse on Instagram.
“I’m not talking about defamation or a bunch of Twitter trolls,” she said.
French agreed that these serious violations complicated the issue, but maintained that it still came down to free speech.
“We have this incredibly difficult challenge that I think is never going to be fully solved,” he said. “And I think we just have to get comfortable with [the idea that] we’ll never fully solve it, because we’re dealing with human beings.”
Democrats Use Whistleblower Testimony to Launch New Effort at Changing Section 230
The Justice Against Malicious Algorithms Act seeks to target large online platforms that push harmful content.
WASHINGTON, October 14, 2021 – House Democrats are preparing to introduce legislation Friday that would remove legal immunities for companies that knowingly allow content that is physically or emotionally damaging to their users. The move follows testimony last week from a Facebook whistleblower who claimed the company is able to push harmful content because of such legal protections.
The Justice Against Malicious Algorithms Act would amend Section 230 of the Communications Decency Act – which provides legal liability protections to companies for the content their users post on their platforms – to remove that shield when the platform “knowingly or recklessly uses an algorithm or other technology to recommend content that materially contributes to physical or severe emotional injury,” according to a Thursday press release. The release noted that the legislation will not apply to small online platforms with fewer than five million unique monthly visitors or users.
The legislation is relatively narrow in its target: algorithms that rely on a user’s personal history to recommend content. It won’t apply to search features or to algorithms that do not rely on that personalization, and it won’t apply to web hosting or data storage and transfer.
Reps. Anna Eshoo, D-California, Frank Pallone Jr., D-New Jersey, Mike Doyle, D-Pennsylvania, and Jan Schakowsky, D-Illinois, plan to introduce the legislation a little over a week after Facebook whistleblower Frances Haugen alleged that the company misrepresents how much offending content it removes.
Citing Haugen’s testimony before the Senate on October 5, Eshoo said in the release that “Facebook is knowingly amplifying harmful content and abusing the immunity of Section 230 well beyond congressional intent.
“The Justice Against Malicious Algorithms Act ensures courts can hold platforms accountable when they knowingly or recklessly recommend content that materially contributes to harm. This approach builds on my bill, the Protecting Americans from Dangerous Algorithms Act, and I’m proud to partner with my colleagues on this important legislation.”
The Protecting Americans from Dangerous Algorithms Act was introduced with Rep. Tom Malinowski, D-New Jersey, last October to hold companies responsible for “algorithmic amplification of harmful, radicalizing content that leads to offline violence.”
From Haugen testimony to legislation
Haugen claimed in her Senate testimony that according to internal research estimates, Facebook acts against just three to five percent of hate speech and 0.6 percent of violence incitement.
“The reality, as we’ve seen from repeated documents in my disclosures, is that Facebook’s AI systems only catch a very tiny minority of offending content and, best-case scenario, in the case of something like hate speech, at most they will ever get 10 to 20 percent,” Haugen testified.
Haugen was catapulted into the national spotlight after she revealed on the television program 60 Minutes that she was the person who leaked documents to the Wall Street Journal and the Securities and Exchange Commission. The documents reportedly showed Facebook knew about the mental health harms its photo-sharing app Instagram has on teens but allegedly ignored them because addressing them was inconvenient to its profit motive.
Earlier this year, Facebook CEO Mark Zuckerberg said the company was developing a version of Instagram for kids under 13. But following the Journal story and calls by lawmakers to back down from pursuing the app, Facebook suspended the app’s development and said it was making changes to its apps to “nudge” users away from content that may be harmful to them.
Haugen’s testimony versus Zuckerberg’s Section 230 vision
In his testimony before the House Energy and Commerce Committee in March, Zuckerberg claimed that the company’s hate speech removal policy “has long been the broadest and most aggressive in the industry.”
This claim has been the basis for the CEO’s suggestion that Section 230 be amended to punish companies whose systems for removing violent and hateful content are not proportional in size and effectiveness to the platform’s scale. In other words, larger sites would face more regulation and smaller sites less.
Or in Zuckerberg’s words to Congress, “platforms’ intermediary liability protection for certain types of unlawful content [should be made] conditional on companies’ ability to meet best practices to combat the spread of harmful content.”
Facebook has previously pushed for FOSTA-SESTA, a controversial 2018 law that created an exception to Section 230 for advertisements related to prostitution. Lawmakers have proposed other modifications to the liability provision, including removing protections for content that platforms are paid to carry and for platforms that allow the spread of vaccine misinformation.
Zuckerberg said companies shouldn’t be held responsible for individual pieces of content that evade their defenses, so long as the company has demonstrated “adequate systems to address unlawful content.” That, he said, is predicated on transparency.
But according to Haugen, “Facebook’s closed design means it has no oversight — even from its own Oversight Board, which is as blind as the public. Only Facebook knows how it personalizes your feed for you. It hides behind walls that keep the eyes of researchers and regulators from understanding the true dynamics of the system.” She also alleges that Facebook’s leadership hides “vital information” from the public and global governments.
An Electronic Frontier Foundation study found that Facebook lags behind competitors on issues of transparency.
Where the parties agree
Zuckerberg and Haugen do agree that Section 230 should be amended. Haugen would amend Section 230 “to make Facebook responsible for the consequences of their intentional ranking decisions,” meaning that practices such as engagement-based ranking would be evaluated for the incendiary or violent content they promote above more mundane content. If Facebook is choosing to promote content which damages mental health or incites violence, Haugen’s vision of Section 230 would hold them accountable. This change would not hold Facebook responsible for user-generated content, only the promotion of harmful content.
Both have also called for Congress to create a third-party body to provide oversight of platforms like Facebook.
Haugen asks that this body be able to conduct independent audits of Facebook’s data, algorithms, and research, and that the information be made available to the public, scholars, and researchers to interpret, with adequate privacy protections and anonymization in place. Besides taking into account the size and scope of the platforms it regulates, Zuckerberg asks that the body’s practices be “fair and clear” and that unrelated issues “like encryption or privacy changes” be dealt with separately.
With reporting from Riley Steward
Repealing Section 230 Would Be Harmful to the Internet As We Know It, Experts Agree
While some advocate for a tightening of language, other experts believe Section 230 should not be touched.
WASHINGTON, September 17, 2021—Rep. Ken Buck, R-Colorado, advocated for legislators to “tighten up” the language of Section 230 while preserving the “spirit of the internet” and enhancing competition.
There is common ground in supporting efforts to minimize speech advocating imminent harm, Buck said, even though he noted that Republican and Democratic critics tend to approach changing Section 230 from vastly different directions.
“Nobody wants a terrorist organization recruiting on the internet or an organization that is calling for violent actions to have access to Facebook,” Buck said. He added, however, that the most effective way to combat “bad speech is with good speech” and not by censoring “what one person considers bad speech.”
Antitrust not necessarily the best means to improve competition policy
For companies that are not technically in violation of antitrust policies, improving competition through other means would have to be the answer, said Buck. He pointed to Parler as a social media platform that offers an appropriate alternative to Twitter.
Though some Twitter users did flock to Parler, particularly during and around the 2020 election, the newer social media company has a reputation for allowing objectionable content that would otherwise be unable to thrive on social media.
Buck also set himself apart from some of his fellow Republicans—including Donald Trump—by clarifying that he does not want to repeal Section 230.
“I think that repealing Section 230 is a mistake,” he said. “If you repeal Section 230, there will be a slew of lawsuits.” Buck explained that without the protections afforded by Section 230, big companies would likely find a way to weather those lawsuits, and the only entities harmed would be the alternative platforms meant to serve as competition.
More content moderation needed
Daphne Keller of the Stanford Cyber Policy Center argued that it is in social media platforms’ best interest to enact various forms of content moderation and to address speech that may be legal but objectionable.
“If platforms just hosted everything that users wanted to say online, or even everything that’s legal to say—everything that the First Amendment permits—you would get this sort of cesspool or mosh pit of online speech that most people don’t actually want to see,” she said. “Users would run away and advertisers would run away and we wouldn’t have functioning platforms for civic discourse.”
Even companies like Parler and Gab—which pride themselves on being unyielding bastions of free speech—have begun to engage in content moderation.
“There’s not really a left-right divide on whether that’s a good idea, because nobody actually wants nothing but porn and bullying and pro-anorexia content and other dangerous or garbage content all the time on the internet,” Keller said.
She explained that this is a double-edged sword, because while consumers seem to value some level of moderation, companies moderating their platforms have a huge amount of influence over what their consumers see and say.
What problems do critics of Section 230 want addressed?
Internet Association President and CEO Dane Snowden stated that most of the problems surrounding the Section 230 discussion boil down to a fundamental disagreement over the problems that legislators are trying to solve.
Changing the language of Section 230 would impact not just the tech industry. “[Section 230] impacts ISPs, libraries, and universities,” he said. “Things like self-publishing, crowdsourcing, Wikipedia, how-to videos—all those things are impacted by any kind of significant neutering of Section 230.”
Section 230 was created to give users the ability and security to create content online without fear of legal reprisals, he said.
Another significant supporter of the status quo was Chamber of Progress CEO Adam Kovacevich.
“I don’t think Section 230 needs to be fixed. I think it needs [a better] publicist,” Kovacevich said, arguing that policymakers need to gain a better appreciation for the law. “If you took away 230, you’d give companies two bad options: either turn into Disneyland or turn into a wasteland.”
“Either turn into a very highly curated experience where only certain people have the ability to post content, or turn into a wasteland where essentially anything goes because a company fears legal liability,” Kovacevich said.
Judge Rules Exemption Exists in Section 230 for Twitter FOSTA Case
Latest lawsuit illustrates the increasing fragility of Section 230 legal protections.
August 24, 2021—A California court has allowed a lawsuit against Twitter to proceed, brought by two victims of sex trafficking who allege the social media company initially refused to remove content that exploited the underage plaintiffs, content that then went viral.
The anonymous plaintiffs allege that they were manipulated into making pornographic videos of themselves through another social media app, Snapchat, after which the videos were posted on Twitter. When the plaintiffs asked Twitter to take down the posts, it refused, and it was only after the Department of Homeland Security got involved that the social media company complied.
At issue in the case is whether, under Section 230 of the Communications Decency Act, which provides legal liability protections for the content platforms’ users post, Twitter had any obligation to remove the content immediately.
The court ruled Thursday that the case should proceed, finding that Twitter knew such content was on the site, had to have known it involved sex trafficking, and refused to do anything about it immediately.
“The Court finds that these allegations are sufficient to allege an ongoing pattern of conduct amounting to a tacit agreement with the perpetrators in this case to allow them to post videos and photographs it knew or should have known were related to sex trafficking without blocking their accounts or the Videos,” the decision read.
“In sum, the Court finds that Plaintiffs have stated a claim for civil liability under the [Trafficking Victims Protection Reauthorization Act] on the basis of beneficiary liability and that the claim falls within the exemption to Section 230 immunity created by FOSTA.”
The Stop Enabling Sex Traffickers Act and the Allow States and Victims to Fight Online Sex Trafficking Act, passed together in 2018 as the package law FOSTA-SESTA, amended Section 230 to exclude the enforcement of federal or state sex trafficking laws from its intermediary protections.
The court dismissed other claims the plaintiffs made against the company, but found that the trafficking claim met the relatively low bar to move the case forward.
The plaintiffs allege that Twitter violated the TVPRA because it knew about the videos, benefitted from them, and did nothing to address the problem before the content went viral.
Twitter argued that FOSTA, as applied to the CDA, only narrowly applies to websites that are “knowingly assisting and profiting from reprehensible crimes;” the plaintiffs allegedly fail to show that the company “affirmatively participated” in such crimes; and the company cannot be held liable “simply because it did not take the videos down immediately.”
Experts asserted companies may hesitate to bring Section 230 defense in court
The case is yet another instance of U.S. courts poking holes in technology companies’ arguments that Section 230 shields them from liability for content on their platforms. The provision is currently the subject of heated debate in Washington over whether to reform it or abolish it altogether.
State judges in Texas and California, for example, have ruled against Amazon’s Section 230 defense in several case-specific instances. Experts on a panel in May said that if courts keep ruling against the defense, a deluge of lawsuits against companies may follow.
And last month, citing some of these cases, lawyers argued that big tech companies may begin to shy away from raising the Section 230 defense in court, for fear of alerting lawmakers to courts’ changing views of the provision and igniting its reform.