Changes to Section 230 Might Lead To Removal of Legitimate Speech, Subtract from Historical Record

March 17, 2021 – Changes to Section 230 of the Communications Decency Act may not lead to the solution that America wants or needs, said panelists during a South by Southwest event Tuesday.

Section 230 grants immunity to social media platforms for user-generated content. It’s an increasingly visible topic before Congress, in the media and in the tech industry as concerns grow about the spread of misinformation and extremism. Like many others, the panelists agreed that something needs to change, but what that change should look like is not clear.

For Kate Ruane, senior legislative counsel at the American Civil Liberties Union, one of the main concerns is the algorithms tech companies use to moderate content. Companies will build systems “that will identify speech that is ‘bad’ or could create a liability risk, they will build those programs and just run them automatically, and there will be no human review of it. And what’s going to happen is far, far more over-moderation than anybody intends,” she said.

Marginalized communities and speech considered outside the mainstream will be targeted, she explained. “Speech like that is speech that we want,” she said. “We don’t get marriage equality without speech like that, we don’t get Black Lives Matter without speech like that.”

Rather than changing Section 230, Ruane took aim at the core business model of big tech companies like Google, Facebook and Twitter. “Actually go after the business model of these companies, which is to collect your data, and market you as the product,” she said. If we can interrupt that, we’re moving in the right direction, she said.

Steven Rosenbaum, managing director at NYC Media Lab, said that disturbing online content generates revenue for social media platforms because users are drawn to it the way drivers slow down to look at a car accident. But these companies need to address the philosophical question of whether they want to support the amplification of that type of content, he said.

In recent months, social media companies have engaged in de-platforming users: Twitter banned former President Donald Trump after a group of his supporters rioted at the U.S. Capitol on January 6, and Amazon Web Services shut down servers for the conservative social media site Parler. But there have been many other instances over the years, such as AWS cutting off hosting for WikiLeaks in 2010 and various social media platforms collectively targeting ISIS content in 2015.

Facebook also suspended Trump’s account, but that action is currently under review by the company’s new oversight board, a committee formed in 2020 that functions as a kind of Supreme Court for Facebook’s content moderation.

Protecting users’ freedom of speech is a concern for many, but Twitter and Facebook are not required to ensure the First Amendment rights of their users. Even so, Ruane said that companies need to be viewpoint-neutral in how they moderate content. “It is very important for platforms like Twitter, Facebook, YouTube, that are responsible for the speech of billions of people around the world, to be avoiding censorship to the extent that they can, because they are gatekeepers to the public square. And when they moderate content, they often get it wrong,” she said.

Social media has been a medium for recruitment and propaganda by violent groups, such as ISIS and, more recently, far-right extremists. Much of that content has been banned from online platforms, which the panelists agreed was a good thing.

Determining what content to remove can be a challenge, though, depending on the type of content, said Amarnath Amarasingam, a professor at Queen’s University in Canada. ISIS content was fairly easy to target because much of it was branded, he said. Content from other groups, such as far-right extremists, is harder to identify because those groups don’t have a brand, he said.

But preserving that content in some way is also important for academic and historical reasons, said Jillian York, director for international freedom of expression at the Electronic Frontier Foundation, stressing the importance of documenting human rights abuses. She expressed concern over content being scrubbed from the internet that documents atrocities and other abuses in places like Syria.

“There is a case to be made, even if that material should not be allowed to be publicly posted, that it should still be documented in some way, and the vast majority of it is thrown in the bin of history by content moderators,” she said.

Ruane agreed with York, referring to the Capitol riot as a recent example. “We’re seeing so much evidence, we’re seeing so much documentation of what happened on January 6 being removed from the internet entirely with no sense of whether we will be able to preserve it for research value or for historical value at all,” she said.

Ruane also expressed concern about the lack of transparency from tech companies in their decisions to remove users and content. Platforms are neither consistent nor transparent in those decisions, she said.

Whether de-platforming actually works to limit a user’s influence remains a major question. The panelists said it is possible that users banned from mainstream sites will simply migrate to other platforms more receptive to their views.