Part IV: As Hate Speech Proliferates Online, Critics Want to See and Control Social Media’s Algorithms
Em McPhie
WASHINGTON, August 22, 2019 — Lurking at the corners of the renewed debate over Section 230 of the Communications Decency Act is this question: Who gets to control the content moderation process surrounding hate speech?
Even as artificial intelligence plays a greater role in content moderation on the big tech platforms, the public is still wrestling with whether content moderation should facilitate free speech or contain harmful speech.
Around the time that Section 230 was passed, most of the discussion surrounding online platforms was based on a “rights framework,” Harvard Law Professor Jonathan Zittrain told Broadband Breakfast. Aside from some limited boundaries against things like active threats, the prevailing attitude was that more speech was always better.
“In the intervening years, in part because of how ubiquitous the internet has become, we’ve seen more of a public health framework,” Zittrain continued. This perspective is concerned less about an individual’s right to speech and more about the harms that such speech could cause.
Misleading information can persuade parents to decide not to vaccinate their children or lead to violence even if the words aren’t a direct incitement, said Zittrain. The public health framework views preventing these harms as an essential part of corporate social responsibility.
Because these contrasting frameworks have such different values and vernaculars, reconciling them into one comprehensive content moderation plan is a nearly impossible task.
What’s the role of artificial intelligence in content moderation?
Another complication in the content moderation debate is that the sheer volume of online content necessitates the use of automated tools — and these tools have some major shortcomings, according to a recent report from New America’s Open Technology Institute.
Algorithmic models are trained on datasets that emphasize particular categories and definitions of speech. These datasets are usually based on English or other Western languages, even though millions of users communicate in other languages. The resulting algorithms can identify certain types of speech but cannot be applied holistically.
In addition, simply training an algorithm to flag certain words or phrases carries the risk of further suppressing voices that are already marginalized. Sometimes, the “toxicity” of a given term is dependent on the identity of the speaker, since many terms that have historically been used as slurs towards certain groups have been reclaimed by those communities while remaining offensive when used by others.
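To make that failure mode concrete, here is a minimal sketch of the kind of term-list flagging described above. It is a hypothetical illustration, not any platform's actual moderation system, and the blocklist entries are placeholders; the point is simply that keyword matching cannot see who is speaking or why.

```python
# Minimal sketch of a keyword-list "toxicity" flagger of the kind critics warn
# about. All names are hypothetical placeholders, not any platform's real code.
# The flagger matches terms with no sense of speaker identity or context, so
# reclaimed in-group usage is treated exactly like a slur used by an outsider.

FLAGGED_TERMS = {"slur_a", "slur_b"}  # hypothetical placeholder blocklist

def flag_post(text: str) -> bool:
    """Return True if the post contains any term on the blocklist."""
    words = {w.strip(".,!?\"'").lower() for w in text.split()}
    return bool(words & FLAGGED_TERMS)

# The same string is flagged regardless of the speaker's identity or dialect --
# the context-blindness that researchers link to biased takedowns.
```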
A 2019 academic study found that “existing approaches to toxic language detection have racial biases, and that text alone does not determine offensiveness.” According to the study, tweets using the African American English dialect were twice as likely to be labeled offensive as other tweets.
“The academic and tech sector are pushing ahead with saying, ‘let’s create automated tools of hate detection,’ but we need to be more mindful of minority group language that could be considered ‘bad’ by outside members,” said Maarten Sap, one of the researchers behind the study.
AI’s inability to detect nuance, particularly in regard to context and differing global norms, results in tools that are “limited in their ability to detect and moderate content, and this often results in erroneous and overbroad takedowns of user speech, particularly for already marginalized and disproportionately targeted communities,” wrote OTI.
Curatorial context is key: Could other activist groups create their own Facebook algorithm?
The problem is that hate speech is inherently dependent on context. And artificial intelligence, as successful as it may be at many things, is incredibly bad at reading nuanced context. For that matter, even human moderators are not always given the full context of the content that they are reviewing.
Moreover, few internet platforms provide meaningful transparency around how they develop and utilize automated tools for content moderation.
The sheer volume of online content has created a new question about neutrality for digital platforms, Zittrain said. Platforms are now not only responsible for what content is banned versus not banned, but also for what is prioritized.
Each digital platform must have some mechanism for choosing which of millions of things to offer at the top of a feed, leading to a complex curatorial process that is fraught with confusion.
This confusion could potentially be alleviated through more transparency from tech companies, Zittrain said. Platforms could even go a step further by allowing third party individuals and organizations to create their own formulas for populating a feed.
Zittrain envisioned Facebook’s default news feed algorithm as a foundation upon which political parties, activist groups, and prominent social figures could construct their own unique algorithms to determine what news should be presented to users and in what order. Users could then select any combination of proxies to curate their feeds, leading to a more diverse digital ecosystem.
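As a rough illustration of what such curatorial proxies might look like, consider the sketch below. Everything in it is hypothetical, including the function names; it is not Facebook's API, only one way of showing how a feed could be ranked by whichever third-party formulas a user chooses to trust.

```python
from typing import Callable, Dict, List

# Hypothetical sketch of Zittrain's idea: each "proxy" is a scoring function
# supplied by a third party (a political party, activist group, or public
# figure), and a user's feed is ordered by the proxies that user selects.

Post = Dict[str, object]         # e.g. {"id": 1, "topic": "local-news", "engagement": 0.42}
Proxy = Callable[[Post], float]  # a third-party ranking formula

def platform_default(post: Post) -> float:
    """Stand-in for the platform's own engagement-driven ranking."""
    return float(post.get("engagement", 0.0))

def civic_news_group(post: Post) -> float:
    """A hypothetical activist group's proxy that boosts local civic coverage."""
    return 1.0 if post.get("topic") == "local-news" else 0.1

def rank_feed(posts: List[Post], proxies: List[Proxy]) -> List[Post]:
    """Order posts by the average score across the user's chosen proxies."""
    return sorted(posts, key=lambda p: sum(f(p) for f in proxies) / len(proxies), reverse=True)

# A user might mix the default ranking with a civic-group proxy:
#   rank_feed(posts, [platform_default, civic_news_group])
```

The design point is that the platform keeps the underlying plumbing while ceding some ranking judgment to curators the user has explicitly chosen.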
Critics of YouTube say the platform’s autoplay pushes extreme content
But without such a system in place, users are dependent on platforms’ existing algorithms and content moderation policies — and these policies are much criticized.
Critics call YouTube’s autoplay function a particularly egregious offender. A Wall Street Journal report found that it guided users toward increasingly extreme and radical content. For example, if users searched for information on a certain vaccine, autoplay would direct them to anti-vaccination videos.
The popular platform’s approach to content moderation “sounded great when it was all about free speech and ‘in the marketplace of ideas, only the best ones win,’” Northeastern University professor Christo Wilson told the Journal. “But we’re seeing again and again that that’s not what happens. What’s happening instead is the systems are being gamed and the people are being gamed.”
Automated tools work best in combating content that is universally objectionable
Automated tools have been found to be the most successful in cases where there is wide consensus as to what constitutes objectionable content, such as the parameters surrounding child sexual abuse material.
However, many categories of so-called hate speech are far more subjective. Hateful speech can cause harms beyond directly inciting violence, such as emotional disturbance or psychic trauma with physiological manifestations, former American Civil Liberties Union President Nadine Strossen told NBC in a 2018 interview.
These are real harms and should be acknowledged, Strossen continued, but “loosening up the constraints on government to allow it to punish speech because of those less tangible, more speculative, more indirect harms … will do more harm than good.”
And attempts at forcing tech platforms to implement more stringent content moderation policies by making such policies a requirement for Section 230 eligibility may do more harm than good, experts say.
Democratic presidential candidate Beto O’Rourke’s newly unveiled plan to do just that would ultimately result in a ‘block first, ask questions later’ mentality, said Free Press Senior Policy Counsel Carmen Scurato.
“This would likely include the blocking of content from organizations and individuals fighting the spread of racism,” Scurato explained. “Removing this liability exemption could have the opposite effect of O’Rourke’s apparent goals.”
O’Rourke’s unlikely alliance with former rival Sen. Ted Cruz, R-Texas, to take on Section 230 highlights just how convoluted the discussion over the statute has become.
Because the First Amendment’s guarantee of freedom of speech is a restriction on government action, it doesn’t help individuals critical of “censorship” by private online platforms.
It’s up to the platforms themselves — and the public pressure and marketplace choices within which they operate — to decide where to draw lines over hate speech and objectionable content on social media.
Section I: The Communications Decency Act is Born
Section II: How Section 230 Builds on and Supplements the First Amendment
Section III: What Does the Fairness Doctrine Have to Do With the Internet?
Section IV: As Hate Speech Proliferates Online, Critics Want to See and Control Social Media’s Algorithms