
Part IV: As Hate Speech Proliferates Online, Critics Want to See and Control Social Media’s Algorithms


Photo of Beto O'Rourke in April 2019 by Gage Skidmore used with permission

WASHINGTON, August 22, 2019 — Lurking at the edges of the renewed debate over Section 230 of the Communications Decency Act is this question: Who gets to control the content moderation process surrounding hate speech?

Even as artificial intelligence is playing a greater role in content moderation on the big tech platforms, the public is still wrestling with whether content moderation should facilitate free speech or contain harmful speech.

Around the time that Section 230 was passed, most of the discussion surrounding online platforms was based on a “rights framework,” Harvard Law Professor Jonathan Zittrain told Broadband Breakfast. Aside from some limited boundaries against things like active threats, the prevailing attitude was that more speech was always better.

“In the intervening years, in part because of how ubiquitous the internet has become, we’ve seen more of a public health framework,” Zittrain continued. This perspective is concerned less about an individual’s right to speech and more about the harms that such speech could cause.

Misleading information can persuade parents to decide not to vaccinate their children or lead to violence even if the words aren’t a direct incitement, said Zittrain. The public health framework views preventing these harms as an essential part of corporate social responsibility.

Because these contrasting frameworks have such different values and vernaculars, reconciling them into one comprehensive content moderation plan is a nearly impossible task.

What’s the role of artificial intelligence in content moderation?

Another complication in the content moderation debate is that the sheer volume of online content necessitates the use of automated tools — and these tools have some major shortcomings, according to a recent report from New America’s Open Technology Institute.

Algorithmic models are trained on datasets that emphasize particular categories and definitions of speech. These datasets are usually based on English or other Western languages, despite the fact that millions of users speak other languages. The resulting algorithms can identify certain types of speech but cannot be applied holistically.

In addition, simply training an algorithm to flag certain words or phrases carries the risk of further suppressing voices that are already marginalized. Sometimes, the “toxicity” of a given term is dependent on the identity of the speaker, since many terms that have historically been used as slurs towards certain groups have been reclaimed by those communities while remaining offensive when used by others.
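To illustrate that limitation, here is a minimal sketch of word-level flagging, the approach the report cautions against. The term list and function below are hypothetical stand-ins rather than any platform's actual system, and real moderation pipelines use trained classifiers rather than a plain blocklist.

```python
# Minimal sketch of naive keyword flagging (hypothetical placeholder terms).
FLAGGED_TERMS = {"term_a", "term_b"}  # stand-ins for words a platform might blocklist

def flag_post(text: str) -> bool:
    """Return True if any blocklisted term appears in the post."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return bool(words & FLAGGED_TERMS)

# The same word is flagged regardless of who wrote it or why: a reclaimed
# in-group usage, a quotation, and a slur all look identical to this check,
# which is the over-suppression risk described above.
print(flag_post("Quoting term_a to criticize it"))   # True
print(flag_post("An unrelated, harmless sentence"))  # False
```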

A 2019 academic study found that “existing approaches to toxic language detection have racial biases, and that text alone does not determine offensiveness.” According to the study, tweets using the African American English dialect were twice as likely to be labelled offensive compared to other tweets.

“The academic and tech sector are pushing ahead with saying, ‘let’s create automated tools of hate detection,’ but we need to be more mindful of minority group language that could be considered ‘bad’ by outside members,” said Maarten Sap, one of the researchers behind the study.

AI’s inability to detect nuance, particularly in regard to context and differing global norms, results in tools that are “limited in their ability to detect and moderate content, and this often results in erroneous and overbroad takedowns of user speech, particularly for already marginalized and disproportionately targeted communities,” wrote OTI.

Curatorial context is key: Could other activist groups create their own Facebook algorithm?

The problem is that hate speech is inherently dependent on context. And artificial intelligence, as successful as it may be at many things, is incredibly bad at reading nuanced context. For that matter, even human moderators are not always given the full context of the content that they are reviewing.

Moreover, few internet platforms provide meaningful transparency around how they develop and utilize automated tools for content moderation.

The sheer volume of online content has created a new question about neutrality for digital platforms, Zittrain said. Platforms are now not only responsible for what content is banned versus not banned, but also for what is prioritized.

Each digital platform must have some mechanism for choosing which of millions of things to offer at the top of a feed, leading to a complex curatorial process that is fraught with confusion.

This confusion could potentially be alleviated through more transparency from tech companies, Zittrain said. Platforms could even go a step further by allowing third party individuals and organizations to create their own formulas for populating a feed.

Zittrain envisioned Facebook’s default news feed algorithm as a foundation upon which political parties, activist groups, and prominent social figures could construct their own unique algorithms to determine what news should be presented to users and in what order. Users could then select any combination of proxies to curate their feeds, leading to a more diverse digital ecosystem.
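A minimal sketch of that pluggable-ranking idea appears below. Everything in it is hypothetical: the Post fields, scorer names, and weights are invented for illustration and do not reflect Facebook's actual systems.

```python
# Hypothetical sketch of user-selectable feed ranking; not any platform's API.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Post:
    text: str
    likes: int
    age_hours: float

Scorer = Callable[[Post], float]  # a third party could supply one of these

def recency_scorer(post: Post) -> float:
    return 1.0 / (1.0 + post.age_hours)

def popularity_scorer(post: Post) -> float:
    return float(post.likes)

def rank_feed(posts: list[Post], mix: list[tuple[Scorer, float]]) -> list[Post]:
    """Order a feed by a user-chosen weighted mix of scoring functions."""
    def combined(post: Post) -> float:
        return sum(weight * scorer(post) for scorer, weight in mix)
    return sorted(posts, key=combined, reverse=True)

# A user might blend the platform default with, say, an activist group's scorer.
feed = rank_feed(
    [Post("a", likes=10, age_hours=2.0), Post("b", likes=200, age_hours=48.0)],
    mix=[(recency_scorer, 0.7), (popularity_scorer, 0.3)],
)
```

The point of the sketch is the separation Zittrain describes: the platform hosts the posts, while the ranking formula becomes a swappable, user-chosen component.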

Critics of YouTube say the platform’s autoplay pushes extreme content

But without such a system in place, users are dependent on platforms’ existing algorithms and content moderation policies — and these policies are much criticized.

YouTube’s autoplay function is a particularly egregious offender. A Wall Street Journal report found that it guided users towards increasingly extreme and radical content. For example, if users searched for information on a certain vaccine, autoplay would direct them to anti-vaccination videos.

The popular platform’s approach to content moderation “sounded great when it was all about free speech and ‘in the marketplace of ideas, only the best ones win,’” Northeastern University professor Christo Wilson told the Journal. “But we’re seeing again and again that that’s not what happens. What’s happening instead is the systems are being gamed and the people are being gamed.”

Automated tools work best in combating content that is universally objectionable

Automated tools have been found to be the most successful in cases where there is wide consensus as to what constitutes objectionable content, such as the parameters surrounding child sexual abuse material.
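In that well-defined case, the dominant automated technique is hash matching against industry-shared databases of previously identified files (Microsoft's PhotoDNA is a well-known example). The sketch below is a simplified, hypothetical illustration using an ordinary cryptographic hash; production systems rely on perceptual hashes that tolerate re-encoding and cropping.

```python
# Simplified, hypothetical sketch of hash matching against known material.
import hashlib

KNOWN_PROHIBITED_HASHES: set[str] = set()  # loaded from a shared industry database

def matches_known_material(file_bytes: bytes) -> bool:
    """Return True if the upload exactly matches previously identified material."""
    digest = hashlib.sha256(file_bytes).hexdigest()
    return digest in KNOWN_PROHIBITED_HASHES

# Exact matching works here only because there is broad agreement on what
# belongs in the database; subjective categories have no equivalent
# ground-truth list to match against.
print(matches_known_material(b"example upload"))  # False with an empty database
```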

However, many categories of so-called hate speech are far more subjective. Hateful speech can cause damage other than a direct incitement to violence, such as emotional disturbance or psychic trauma with physiological manifestations, former American Civil Liberties Union President Nadine Strossen told NBC in a 2018 interview.

These are real harms and should be acknowledged, Strossen continued, but “loosening up the constraints on government to allow it to punish speech because of those less tangible, more speculative, more indirect harms … will do more harm than good.”

And attempts to force tech platforms to adopt more stringent content moderation policies by making such policies a requirement for Section 230 eligibility may backfire as well, experts say.

Democratic presidential candidate Beto O’Rourke’s newly unveiled plan to do just that would ultimately result in a ‘block first, ask questions later’ mentality, said Free Press Senior Policy Counsel Carmen Scurato.

“This would likely include the blocking of content from organizations and individuals fighting the spread of racism,” Scurato explained. “Removing this liability exemption could have the opposite effect of O’Rourke’s apparent goals.”

O’Rourke’s unlikely alliance with former rival Sen. Ted Cruz, R-Texas, to take on Section 230 highlights just how convoluted the discussion over the statute has become.

Because the First Amendment’s guarantee of freedom of speech is a restriction on government action, it doesn’t help individuals critical of “censorship” by private online platforms.

It’s up to the platforms themselves — and the public pressure and marketplace choices within which they operate — to decide where to draw lines over hate speech and objectionable content on social media.

Section I: The Communications Decency Act is Born

Section II: How Section 230 Builds on and Supplements the First Amendment

Section III: What Does the Fairness Doctrine Have to Do With the Internet?

Section IV: As Hate Speech Proliferates Online, Critics Want to See and Control Social Media’s Algorithms

Development Associate Emily McPhie studied communication design and writing at Washington University in St. Louis, where she was a managing editor for campus publication Student Life. She is a founding board member of Code Open Sesame, an organization that teaches computer skills to underprivileged children in six cities across Southern California.


Repealing Section 230 Would be Harmful to the Internet As We Know It, Experts Agree

While some advocate for a tightening of language, other experts believe Section 230 should not be touched.


Rep. Ken Buck, R-Colo., speaking on the floor of the House

WASHINGTON, September 17, 2021—Republican representative from Colorado Ken Buck advocated for legislators to “tighten up” the language of Section 230 while preserving the “spirit of the internet” and enhancing competition.

There is common ground in supporting efforts to minimize speech advocating for imminent harm, said Buck, though he noted that Republican and Democratic critics tend to approach the issue of changing Section 230 from vastly different directions.

“Nobody wants a terrorist organization recruiting on the internet or an organization that is calling for violent actions to have access to Facebook,” Buck said. He followed up that statement, however, by stating that the most effective way to combat “bad speech is with good speech” and not by censoring “what one person considers bad speech.”

Antitrust not necessarily the best means to improve competition policy

For companies that are not technically in violation of antitrust policies, improving competition through other means would have to be the answer, said Buck. He pointed to Parler as a social media platform that is an appropriate alternative to Twitter.

Though some Twitter users did flock to Parler, particularly during and around the 2020 election, the newer social media company has a reputation for allowing objectionable content that would otherwise be unable to thrive on social media.

Buck also set himself apart from some of his fellow Republicans—including Donald Trump—by clarifying that he does not want to repeal Section 230.

“I think that repealing Section 230 is a mistake,” he said. “If you repeal Section 230 there will be a slew of lawsuits.” Buck explained that without the protections afforded by Section 230, big companies will likely find a way to sufficiently address these lawsuits, and the only entities harmed will be the alternative platforms that were meant to serve as competition.

More content moderation needed

Daphne Keller of the Stanford Cyber Policy Center argued that it is in the best interest of social media platforms to enact various forms of content moderation, and address speech that may be legal but objectionable.

“If platforms just hosted everything that users wanted to say online, or even everything that’s legal to say—everything that the First Amendment permits—you would get this sort of cesspool or mosh pit of online speech that most people don’t actually want to see,” she said. “Users would run away and advertisers would run away and we wouldn’t have functioning platforms for civic discourse.”

Even companies like Parler and Gab—which pride themselves on being unyielding bastions of free speech—have begun to engage in content moderation.

“There’s not really a left-right divide on whether that’s a good idea, because nobody actually wants nothing but porn and bullying and pro-anorexia content and other dangerous or garbage content all the time on the internet,” she said.

She explained that this is a double-edged sword, because while consumers seem to value some level of moderation, companies moderating their platforms have a huge amount of influence over what their consumers see and say.

What problems do critics of Section 230 want addressed?

Internet Association President and CEO Dane Snowden stated that most of the friction in the Section 230 discussion boils down to a fundamental disagreement over which problems legislators are trying to solve.

Changing the language of Section 230 would impact not just the tech industry: “[Section 230] impacts ISPs, libraries, and universities,” he said. “Things like self-publishing, crowdsourcing, Wikipedia, how-to videos—all those things are impacted by any kind of significant neutering of Section 230.”

Section 230 was created to give users the ability and security to create content online without fear of legal reprisals, he said.

Another significant supporter of the status quo was Chamber of Progress CEO Adam Kovacevich.

“I don’t think Section 230 needs to be fixed. I think it needs [a better] publicist.” Kovacevich stated that policymakers need to gain a better appreciation for Section 230: “If you took away 230, you’d give companies two bad options: either turn into Disneyland or turn into a wasteland.”

“Either turn into a very highly curated experience where only certain people have the ability to post content, or turn into a wasteland where essentially anything goes because a company fears legal liability,” Kovacevich said.


Judge Rules Exemption Exists in Section 230 for Twitter FOSTA Case

Latest lawsuit illustrates the increasing fragility of Section 230 legal protections.


Twitter CEO Jack Dorsey.

August 24, 2021—A California court has allowed a lawsuit to proceed against Twitter from two victims of sex trafficking, who allege the social media company initially refused to remove content that exploited the underage plaintiffs and that subsequently went viral.

The anonymous plaintiffs allege that they were manipulated into making pornographic videos of themselves through another social media app, Snapchat, after which the videos were posted on Twitter. When the plaintiffs asked Twitter to take down the posts, it refused, and it was only after the Department of Homeland Security got involved that the social media company complied.

At issue in the case is whether Twitter had any obligation to remove the content immediately under Section 230 of the Communications Decency Act, which shields platforms from legal liability for the content their users post.

Court’s finding

The court ruled Thursday that the case should proceed after finding that Twitter knew such content was on the site, had to have known it was sex trafficking, and refused to do something about it immediately.

“The Court finds that these allegations are sufficient to allege an ongoing pattern of conduct amounting to a tacit agreement with the perpetrators in this case to allow them to post videos and photographs it knew or should have known were related to sex trafficking without blocking their accounts or the Videos,” the decision read.

“In sum, the Court finds that Plaintiffs have stated a claim for civil liability under the [Trafficking Victims Protection Reauthorization Act] on the basis of beneficiary liability and that the claim falls within the exemption to Section 230 immunity created by FOSTA.”

The Stop Enabling Sex Traffickers Act and the Allow States and Victims to Fight Online Sex Trafficking Act, passed together in 2018 as the package law SESTA-FOSTA, amended Section 230 to exclude the enforcement of federal or state sex trafficking laws from its intermediary protections.

The court dismissed other claims the plaintiffs made against the company, but found that the trafficking claim met the relatively low bar to move the case forward.

The arguments

The plaintiffs allege that Twitter violated the TVPRA because it knew about the videos, benefited from them and did nothing to address the problem before the content went viral.

Twitter argued that FOSTA, as applied to the CDA, only narrowly applies to websites that are “knowingly assisting and profiting from reprehensible crimes;” the plaintiffs allegedly fail to show that the company “affirmatively participated” in such crimes; and the company cannot be held liable “simply because it did not take the videos down immediately.”

Experts asserted companies may hesitate to bring Section 230 defense in court

The case is yet another instance of U.S. courts increasingly poking holes in arguments brought by technology companies that they cannot be held liable for content on their platforms under Section 230, which is currently the subject of hot debate in Washington over whether to reform or completely abolish it.

A number of state judges have ruled against Amazon and its Section 230 defense, for example, in case-specific instances in Texas and California. Experts on a panel in May said that if courts keep ruling against the defense, a deluge of lawsuits against companies may follow.

And last month, citing some of these cases, lawyers argued that big tech companies may begin to shy away from raising the Section 230 defense in court, for fear of awakening lawmakers to changing legal views on the provision that could ignite its reform.


Facebook, Google, Twitter Register to Lobby Congress on Section 230

Companies also want to discuss cybersecurity, net neutrality, taxes and privacy.


Facebook CEO Mark Zuckerberg

August 3, 2021 — The largest social media companies have registered to lobby Congress on Section 230, according to lobby records.

Facebook, Google, and Twitter filed new paperwork late last month to discuss the internet liability provision under the Communications Decency Act, which protects these companies from legal trouble for content their users post.

Facebook’s registration specifically mentions the Safe Tech Act, an amendment to the provision proposed earlier this year by Sens. Amy Klobuchar, D-Minnesota, Mark Warner, D-Virginia, and Mazie Hirono, D-Hawaii, which would largely keep the provision’s protections except for content the platforms are paid for.

A separate Facebook registration included discussion on the “repeal” of the provision.

Other issues included in the Menlo Park-based company’s registration are privacy, data security, online advertising, and general regulations on the social media industry.

Google also wants to discuss taxes and cybersecurity, as security issues take center stage following high-profile attacks and as international proposals for a new tax regime on tech companies emerge.

Notable additional subjects Twitter includes in its registration are content moderation practices, data security, misinformation, and net neutrality, as the Federal Communications Commission is being urged to bring back Obama-era policies friendly to the principle that content cannot be given preferential treatment on networks.

Section 230 has gripped Congress

Social media critics have been foaming at the mouth over possible retaliatory measures against the technology companies, which have taken increasingly strong action against users who violate their policies.

Those discussions picked up steam when, at the beginning of the year, former President Donald Trump was banned from Twitter, and then from Facebook and other platforms, for allegedly stoking the Capitol Hill riot on January 6. (Trump has since filed a lawsuit as a private citizen against the social media giants for his removal.)

Since the Capitol riot, a number of proposals have been put forward to amend — in some cases completely repeal — the provision to address what some Republicans are calling outright censorship by social media companies. Even Florida tried to take matters into its own hands when it enacted a law penalizing social media companies that ban politicians. That law has since been put on hold by the courts.

The social media giants, and their allies in the industry, have pressed the importance of the provision, which they say has allowed once-fledgling companies like Facebook to become what they are today. And some representatives think reform of the law could lean more toward amendment than outright repeal. But lawyers have warned about a shift in attitude toward those liability protections, as more judges in courts across the country hold big technology companies accountable for harm caused by their platforms.
