
Content Moderation, Section 230 and the Future of Online Speech

Our comprehensive report examines the timely issue of content moderation and Section 230 from multiple angles.

Thank you for being a part of the Broadband Breakfast Club. We hope you enjoy our March 2023 special report. Questions? Email drew@breakfast.media

In the 27 years since the so-called “26 words that created the internet” became law, rapid technological developments and sharp partisan divides have fueled increasingly complex content moderation dilemmas.

Earlier this year, the Supreme Court tackled Section 230 for the first time through a pair of cases regarding platform liability for hosting and promoting terrorist content. In addition to the court’s ongoing deliberations, Section 230—which protects online intermediaries from liability for third-party content—has recently come under attack from Congress, the White House and multiple state legislatures.

Many Democrats want the ability to hold online platforms liable for any content they carry, arguing that Section 230 enables disinformation, hate speech and extremism to proliferate unchecked. Many Republicans want the ability to sue online platforms for any content they take down, claiming that Section 230 facilitates widespread censorship of conservative content. President Joe Biden and former President Donald Trump have both called for the repeal of Section 230.

Meanwhile, tech companies have made dramatic claims that “gutting Section 230 would upend the internet and perversely encourage both wide-ranging suppression of speech and the proliferation of more offensive speech,” as Google argued in January. And support for Section 230 extends far beyond Silicon Valley’s tech titans to traditional civil libertarians: The American Civil Liberties Union lamented, “Is this the end of the internet as we know it?”


Holding tech companies accountable for online harms

The long-awaited intermediary liability cases argued before the Supreme Court in February, Gonzalez v. Google and Twitter v. Taamneh, were both brought by families of terrorist attack victims who claimed that the platforms knowingly aided terrorism by hosting terrorist content.

Although Supreme Court justices expressed skepticism about the claims, Gonzalez and Taamneh get to the heart of many left-wing concerns about tech companies: that online platforms are failing to prevent—and in some cases, are actively perpetuating—a variety of harms to users.

Democratic critics of Section 230 argue that the statute gives tech platforms far too much leeway to profit off of illicit and dangerous content while avoiding accountability.

“Courts have interpreted Section 230 to protect online classifieds sites from responsibility for advertising sex trafficking [and] online firearms sellers from responsibility for facilitating unlawful gun sales,” wrote Mary Anne Franks, law professor and president of the Cyber Civil Rights Initiative, in a 2021 paper.

Franks and other Section 230 critics frequently highlight the statute’s “Good Samaritan” provision, arguing that such laws are generally meant to encourage positive conduct but that the current interpretation of Section 230 fails to do so. While the “Good Samaritan” title carries no legal weight, many Democrats have proposed making Section 230’s liability protections contingent on robust, good-faith moderation.

“The law was intended, in part, to protect children and promote diversity—not to allow defective products and harmful products to be put on the market,” law professor Angela Campbell said in February.

Section 230 is one of the few surviving components of the 1996 Communications Decency Act, which was largely struck down just a year after its passage. The law was born out of a moral panic about the rapidly increasing accessibility of indecent content online.

Fearing that the government would be unable to adequately fight indecent content, Sen. Ron Wyden, D-Ore., and former Rep. Chris Cox, R-Calif., introduced Section 230 for the explicit purpose of encouraging content moderation.

“Not even a federal internet censorship army would give our government the power to keep offensive material out of the hands of children who use the new interactive media,” Wyden said.

Proponents of changes to Section 230 claim that the statute’s scope has now expanded far beyond this original purpose.

“Lower courts have ironically applied Section 230, entitled ‘protection for private blocking and screening of offensive material,’ to protect from liability sites designed to purvey offensive material,” wrote law professor Danielle Citron and Brookings Senior Fellow Benjamin Wittes in a 2017 paper.

Do tech companies deserve unique protections?

Citron and Wittes argue that Section 230 provides online platforms with excessive protections not afforded to other industries, allowing Big Tech to get away with conduct that would not otherwise be tolerated.

“In physical space, a business that arranged private rooms for strangers to meet, knowing that sexual predators were using its service to meet kids, would have to do a great deal more than warn people to proceed ‘at their own peril’ to avoid liability when bad things happened,” the experts wrote.

Victims’ rights attorney Carrie Goldberg argued in a December opinion article that Section 230 has become a “get-out-of-court-free card” for tech companies in cases such as Gonzalez.

“Although no different from prior [Anti-Terrorism Act] lawsuits brought against banks, airlines, charities and governments for the critical roles they played in incidents of terrorism, Google convinced lower courts that even if it did aid and abet terrorism, it’s immune from liability because it’s an internet company,” Goldberg wrote.

Although proposals for changing Section 230 are “bound to see the predictable fearmongering” that they risk destroying the internet, Goldberg noted that the court is not being asked to determine Google’s actual liability. “The only question is whether Google could be liable,” she wrote.

Republican-led states have introduced ‘must carry’ laws 

Unlike the Democratic push for increased platform responsibility and moderation, many Republican lawmakers argue that tech companies should remove as little content as possible—and claim that current moderation practices result in the systemic censorship of conservative ideas.

The Supreme Court is currently considering whether to hear a pair of cases challenging Republican-backed laws in Florida and Texas that would place severe limitations on content moderation practices.

Florida’s S.B. 7072 restricts online platforms’ ability to moderate content, and fully prohibits platforms from restricting content posted by political candidates. It also adds new disclosure and opt-out requirements for content moderation practices. In Texas, H.B. 20 bars platforms with at least 50 million monthly users from removing or demonetizing content based on “viewpoint.”

Both of these “must carry” laws were immediately challenged, and federal appeals courts took different positions on their validity. In May, the Eleventh Circuit largely agreed with a lower court’s decision to block the Florida law from taking effect.

“When platforms choose to remove users or posts, deprioritize content in viewers’ feeds or search results, or sanction breaches of their community standards, they engage in First Amendment-protected activity,” Judge Kevin Newsom wrote.

But in September, the Fifth Circuit rejected the challenge to the Texas law, ruling that social media platforms “exercise virtually no editorial control or judgment” beyond the use of algorithms to remove a certain amount of spam and obscene content.

The decision differentiated between the Texas and Florida laws, saying that the latter “prohibits all censorship of some speakers” while the former “prohibits some censorship of all speakers.”

“Texas’s law permits non-viewpoint-based censorship and censorship of certain constitutionally unprotected expression regardless of who the speaker is… instead of singling out political candidates and journalists for favored treatment,” Judge Andrew Oldham wrote.

Congressional Republicans largely oppose content moderation

In February, Sen. Ted Cruz, R-Texas, launched an oversight investigation into several major platforms. “Today’s behemoth social media platforms appear to have adopted the view that a user’s ability to post content does not entitle the user to distribute content,” he wrote. “In other words, as the theory goes, platforms are not restricting speech when they throttle a social media poster’s otherwise benign content, including via recommendations.”

Cruz argued that “this kind of soft censorship is still censorship,” and emphasized his concerns about recommendation algorithms, noting that they have a major effect on what Americans see, think and ultimately believe.

Nearly three dozen House Republicans, led by House Judiciary Committee Chairman Jim Jordan, R-Ohio, sent a letter to tech companies last September voicing concerns about “how some in government have sought to use Big Tech to censor divergent viewpoints and silence opposing political speech.”

The letter focused on social media platforms’ temporary blocking of an October 2020 New York Post article about Hunter Biden’s laptop, calling the decision “knowing suppression of First Amendment-protected activity.” Despite continued concerns about the article’s veracity, former Twitter CEO Jack Dorsey later admitted that blocking the link from being shared without providing adequate context for the decision was “unacceptable.”

Tech advocates dispute ‘censorship’ claims with data

Silicon Valley executives, alongside a contingent of pro-tech liberals and libertarians, have long protested allegations of conservative censorship.

“Part of the tension on Capitol Hill is the Republicans continue to push this false narrative that tech is anti-conservative,” computer science professor Hany Farid said in 2020. “There is no data to support this. The data that is there is in the other direction and says conservatives dominate social media.”

Conservative organizations and individuals have consistently ranked among the top-performing accounts on Facebook. In 2020, Fox News saw far more user engagement than any other media organization, with 448 million likes, shares and comments. Breitbart came in second with 295 million user interactions, and CNN followed with 191 million.

For its part, Twitter has a history of suspending users across the ideological spectrum. In 2018, the social media powerhouse suspended dozens of activists linked to the Occupy movement without providing an explanation. In 2020, it banned 70 accounts affiliated with Democratic presidential candidate Mike Bloomberg’s campaign, claiming they had violated the platform’s spam policy.

Several Republican lawmakers have pointed to the political donations of Silicon Valley employees—which are overwhelmingly directed towards Democrats—as evidence of political bias in the platforms’ content moderation practices.

But very few day-to-day moderation decisions are made by Silicon Valley employees. A 2020 report noted that the “vast majority” of content moderation work is outsourced to third-party vendors, many of which are based outside of the U.S. and depend on “relatively low-paid labor.”

In fact, Facebook has frequently acquiesced to Republican demands. A 2016 anti-propaganda initiative ground to a halt after it was discovered that most of the pages spreading disinformation had a rightward bent. “We can’t remove all of it because it will disproportionately affect conservatives,” top Facebook executive Joel Kaplan reportedly said.

The following year, Kaplan objected to another Facebook initiative, this one aimed at deprioritizing news content overall in favor of posts from family and friends. Concerned that any demotion of conservative sites would spark renewed censorship allegations, the company ultimately adjusted the algorithm to more heavily downrank left-leaning pages.

Broader research has echoed the idea that a partisan discrepancy in user suspension rates stems from “a political asymmetry in misinformation sharing.” Some Republicans have decried such claims, arguing that “misinformation” as defined by professional fact-checkers is inherently biased, but researchers found a similar pattern when using the evaluations of politically balanced groups of laypeople.

Both sides misunderstand the First Amendment, experts claim

But setting aside the controversies over Big Tech’s so-called censorship, the more relevant question for Section 230 is whether political bias even matters: the statute’s protections are not contingent on political neutrality.

Following Trump’s call to “revoke 230” after Twitter disabled engagement with one of his tweets, Ashkhen Kazaryan, then director of civil liberties at TechFreedom, argued that such a measure would itself constitute a free speech violation.

“The First Amendment protects Twitter from Trump,” she said. “It does not protect Trump from Twitter.”

This key distinction is often absent from fearmongering, but has been repeatedly upheld by the courts.

In a 2017 lawsuit against YouTube, right-wing advocacy organization Prager University claimed that the platform violated the First Amendment by restricting and demonetizing certain PragerU videos. An appellate court upheld the case’s dismissal, writing that “despite YouTube’s ubiquity and its role as a public-facing platform, it remains a private forum, not a public forum subject to judicial scrutiny under the First Amendment.”

Similar misconceptions about Section 230 and the First Amendment have occurred on the other side of the aisle as well.

In 2020, Biden argued that major tech platforms should lose Section 230 protections because they were “propagating falsehoods they know to be false.” He repeated the call to repeal Section 230 in 2022, saying that the move would “hold social media platforms accountable for spreading hate and fueling violence.”

These claims imply that in the absence of Section 230, tech companies would suddenly become liable for all misinformation and harassment appearing on their platforms. But this is not the case.

“Nearly all misinformation is protected by the First Amendment,” Techdirt Founder Mike Masnick said in 2021.

However, even though the First Amendment protects most hate speech and misinformation, it cannot prevent a potential flood of meritless litigation. Section 230 grants defendants substantial procedural benefits, paving the way for quick dismissals.

These early dismissals still play an important role in preventing what law professor Eric Goldman has termed “‘collateral censorship’: the proactive removal of legitimate content as a prophylactic way of reducing potential legal risk and the associated potential defense costs.”

In a 2019 essay, Goldman cautioned that proposals aimed at tackling online hate speech by reforming Section 230 could severely backfire, primarily harming marginalized communities. Other supporters of Section 230 have warned that its removal would stifle lawful speech.

Speaking at a Feb. 16 press briefing in advance of the Gonzalez arguments, Caitlin Vogus, deputy director of the Center for Democracy & Technology’s Free Expression Project, argued that such fears are not entirely hypothetical.

“In the wake of SESTA/FOSTA, a law that was passed in 2018 to amend Section 230 to target online sex trafficking, we saw certain services—like Tumblr, for example—bar all adult content on their sites out of fear of liability,” Vogus said. Likewise, a liability-driven ban on all discussion of terrorism “would sweep up a lot of constitutionally protected and beneficial speech too,” she said.

Section 230 enables moderation and innovation, industry says

As they fight to keep Section 230 protections, tech industry leaders have warned that proposed reforms could lead to unintended consequences for both Republicans and Democrats.

Section 230 is what enables social media platforms to “work rigorously [and] religiously to keep that stuff off of their sites,” said Robert Atkinson, president of the Information Technology and Innovation Foundation, in January.

Other executives warned that changing Section 230 could hamper innovation, further concentrating power in the largest tech companies. Up-and-coming platforms would excessively remove content to avoid exposing themselves to frivolous lawsuits, said Linda Moore, CEO of the internet industry trade group TechNet.

In the absence of Section 230, Chamber of Progress CEO Adam Kovacevich argued that “platforms would have a choice between being Disneyland—a very sanitized environment—or a wasteland.”

“If you choose Disneyland, then a lot of conservative content is going to get pushed off platforms altogether—which I think is not really well understood by conservatives—and if you have a wasteland, consumers are just going to give up on social media,” Kovacevich said.

“There’s a reason why 4chan is not popular,” he added, referring to an anonymous online forum with minimal moderation of its content. The reason? 4chan is a cesspool of Neo-Nazi recruitment, pornography and explicit threats of violence toward minority groups.

Without content moderation, platforms would not only see a proliferation of “personal bullying, neo-Nazi screeds, terrorist beheadings and child sexual abuse,” but also be inundated with spam, law professor Paul Barrett wrote in a 2020 report.

In the fourth quarter of 2022, Facebook reported removing 1.8 billion pieces of spam and 1.3 billion fake accounts—together comprising the vast majority of all removed content. Other moderation efforts were directed at 29.2 million pieces of content containing adult nudity and sexual activity, 25.2 million pieces of content containing child endangerment and exploitation, and 15.5 million pieces of graphically violent content.

By comparison, the content categories often emphasized in moderation debates made up a relatively small share of total content removal: In that same time period, Facebook reportedly took action on 11 million pieces of content it termed “hate speech” and 6.4 million pieces that were termed “bullying and harassment.”

The massive amount of user-generated content added to online platforms every single day—including an average of 500 million new tweets, 700 million Facebook comments and 720,000 hours of YouTube videos—makes it practically inevitable that some content will fall through the cracks. As a result, even narrowly targeted changes to Section 230 could motivate platforms to abandon moderation altogether for fear of liability.

Where does Section 230 go from here?

In spite of all the attention garnered by Gonzalez and Taamneh, many experts believe that the Supreme Court is unlikely to make significant changes to Section 230 in the near future. During the Gonzalez oral arguments, Justices Elena Kagan and Brett Kavanaugh both directly suggested that the issue might be better left to Congress.

Congress has certainly taken steps toward Section 230 reform; between 2020 and 2022, members introduced dozens of bills that would impact the statute, according to Slate’s Section 230 Reform Hub. Among them were several bids for a complete repeal, as well as a broad range of proposals that would either limit its scope or impose new obligations.

Although Congressional hostility toward Section 230 only seems to be growing, bipartisan compromise appears as far away as ever.

Content moderation disputes are sometimes attributed to straightforward disagreement over what constitutes misinformation, but the reality is complicated by a deep ideological divide over speech governance. A January study found that even when participants from both parties agreed that a given headline was inaccurate, Democrats were nearly twice as likely as Republicans to think the content should be removed, while Republicans were nearly twice as likely as Democrats to consider such removal censorship.

These findings “suggest that settling factual disagreements will not resolve partisan conflict over content moderation,” the researchers wrote.

In the absence of national movement, states will likely continue to fill in the gaps with measures such as the Texas and Florida “must carry” laws. California and New York have already passed their own controversial social media laws, aimed at combatting hate speech through increased platform transparency.

Speaking at State of the Net on March 6, Digital Progress Institute President Joel Thayer suggested that this patchwork of state legislation—and the ensuing compliance issues for online platforms with a national presence—might finally provide the “huge political incentives” necessary to motivate bipartisan collaboration.