

Part IV: As Hate Speech Proliferates Online, Critics Want to See and Control Social Media’s Algorithms


Photo of Beto O'Rourke in April 2019 by Gage Skidmore used with permission

WASHINGTON, August 22, 2019 — Lurking at the corners of the renewed debate over Section 230 of the Communications Decency Act is this question: Who gets to control the content moderation process surrounding hate speech?

Even as artificial intelligence is playing a greater role in content moderation on the big tech platforms, the public is still wrestling with whether content moderation should facilitate free speech or contain harmful speech.

Around the time that Section 230 was passed, most of the discussion surrounding online platforms was based on a “rights framework,” Harvard Law Professor Jonathan Zittrain told Broadband Breakfast. Aside from some limited boundaries against things like active threats, the prevailing attitude was that more speech was always better.

“In the intervening years, in part because of how ubiquitous the internet has become, we’ve seen more of a public health framework,” Zittrain continued. This perspective is concerned less about an individual’s right to speech and more about the harms that such speech could cause.

Misleading information can persuade parents to decide not to vaccinate their children or lead to violence even if the words aren’t a direct incitement, said Zittrain. The public health framework views preventing these harms as an essential part of corporate social responsibility.

Because these contrasting frameworks have such different values and vernaculars, reconciling them into one comprehensive content moderation plan is a nearly impossible task.

What’s the role of artificial intelligence in content moderation?

Another complication in the content moderation debate is that the sheer volume of online content necessitates the use of automated tools — and these tools have some major shortcomings, according to a recent report from New America’s Open Technology Institute.

Algorithmic models are trained on datasets that emphasize particular categories and definitions of speech. These datasets are usually based on English or other Western languages, despite the fact that millions of users speak other languages. The resulting algorithms can identify certain types of speech but cannot be applied holistically across languages and contexts.

In addition, simply training an algorithm to flag certain words or phrases carries the risk of further suppressing voices that are already marginalized. Sometimes, the “toxicity” of a given term is dependent on the identity of the speaker, since many terms that have historically been used as slurs towards certain groups have been reclaimed by those communities while remaining offensive when used by others.
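
To make that failure mode concrete, here is a minimal illustrative sketch of a purely keyword-based flagger. The placeholder term list, the helper name naive_flag, and the example posts are all hypothetical; this is not drawn from any platform’s actual moderation system.

```python
# Toy keyword-based flagger, for illustration only. The placeholder tokens
# below stand in for flagged terms; no real platform's term list is implied.

FLAGGED_TERMS = {"term_a", "term_b"}  # hypothetical stand-ins for flagged words

def naive_flag(post: str) -> bool:
    """Flag a post if it contains any listed term, ignoring speaker and context."""
    tokens = {word.strip(".,!?").lower() for word in post.split()}
    return bool(tokens & FLAGGED_TERMS)

# The same reclaimed term is flagged identically whether it appears in
# in-group speech or in a hostile attack: the context-blindness described above.
print(naive_flag("an in-group post reclaiming term_a"))               # True
print(naive_flag("a hostile post aimed at that group using term_a"))  # True
print(naive_flag("a post containing none of the listed terms"))       # False
```

Production systems use learned classifiers rather than static term lists, but as the study described below found, those classifiers can inherit much the same context-blindness from their training labels.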

A 2019 academic study found that “existing approaches to toxic language detection have racial biases, and that text alone does not determine offensiveness.” According to the study, tweets using the African American English dialect were twice as likely to be labelled offensive compared to other tweets.

“The academic and tech sector are pushing ahead with saying, ‘let’s create automated tools of hate detection,’ but we need to be more mindful of minority group language that could be considered ‘bad’ by outside members,” said Maarten Sap, one of the researchers behind the study.

AI’s inability to detect nuance, particularly in regard to context and differing global norms, results in tools that are “limited in their ability to detect and moderate content, and this often results in erroneous and overbroad takedowns of user speech, particularly for already marginalized and disproportionately targeted communities,” wrote OTI.

Curatorial context is key: Could activist groups and others create their own Facebook algorithms?

The problem is that hate speech is inherently dependent on context. And artificial intelligence, as successful as it may be at many things, is incredibly bad at reading nuanced context. For that matter, even human moderators are not always given the full context of the content that they are reviewing.

Moreover, few internet platforms provide meaningful transparency around how they develop and utilize automated tools for content moderation.

The sheer volume of online content has created a new question about neutrality for digital platforms, Zittrain said. Platforms are now responsible not only for deciding what content is banned, but also for what is prioritized.

Each digital platform must have some mechanism for choosing which of millions of things to offer at the top of a feed, leading to a complex curatorial process that is fraught with confusion.

This confusion could potentially be alleviated through more transparency from tech companies, Zittrain said. Platforms could even go a step further by allowing third party individuals and organizations to create their own formulas for populating a feed.

Zittrain envisioned Facebook’s default news feed algorithm as a foundation upon which political parties, activist groups, and prominent social figures could construct their own unique algorithms to determine what news should be presented to users and in what order. Users could then select any combination of proxies to curate their feeds, leading to a more diverse digital ecosystem.
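
As a rough sketch of what such a system might look like, the snippet below treats third-party “proxies” as interchangeable scoring functions that a user can mix into a feed ranking. Every name here (Post, Proxy, rank_feed, the example scorers) is a hypothetical illustration, not an actual Facebook interface.

```python
# A rough sketch of Zittrain's proposal: third-party "proxies" as pluggable
# scoring functions layered over a platform's default ranking. All names and
# scoring rules here are hypothetical; this is not an actual Facebook API.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Post:
    author: str
    topic: str
    engagement: float  # e.g. normalized likes and shares

Proxy = Callable[[Post], float]  # any third party's scoring function

def platform_default(post: Post) -> float:
    return post.engagement  # stand-in for the platform's default signal

def local_news_proxy(post: Post) -> float:
    return 2.0 if post.topic == "local-news" else 0.5  # one group's priorities

def rank_feed(posts: List[Post], proxies: List[Proxy]) -> List[Post]:
    """Order a feed by averaging the scores of the user's chosen proxies."""
    def score(p: Post) -> float:
        return sum(proxy(p) for proxy in proxies) / len(proxies)
    return sorted(posts, key=score, reverse=True)

feed = [Post("civic-group", "local-news", 0.2), Post("meme-page", "viral", 0.9)]
ranked = rank_feed(feed, [platform_default, local_news_proxy])
print([p.author for p in ranked])  # ['civic-group', 'meme-page']
```

Swapping in a different set of proxies reorders the same underlying content, which is the kind of user-selected diversity Zittrain describes.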

Critics of YouTube say the platform’s autoplay pushes extreme content

But without such a system in place, users are dependent on platforms’ existing algorithms and content moderation policies — and these policies are much criticized.

YouTube’s autoplay function has drawn particular criticism. A Wall Street Journal report found that it guided users toward increasingly extreme and radical content. For example, if users searched for information on a certain vaccine, autoplay would direct them to anti-vaccination videos.

The popular platform’s approach to content moderation “sounded great when it was all about free speech and ‘in the marketplace of ideas, only the best ones win,’” Northeastern University professor Christo Wilson told the Journal. “But we’re seeing again and again that that’s not what happens. What’s happening instead is the systems are being gamed and the people are being gamed.”

Automated tools work best in combating content that is universally objectionable

Automated tools have been found to be the most successful in cases where there is wide consensus as to what constitutes objectionable content, such as the parameters surrounding child sexual abuse material.

However, many categories of so-called hate speech are far more subjective. Hateful speech can cause damage other than a direct incitement to violence, such as emotional disturbance or psychic trauma with physiological manifestations, former American Civil Liberties Union President Nadine Strossen told NBC in a 2018 interview.

These are real harms and should be acknowledged, Strossen continued, but “loosening up the constraints on government to allow it to punish speech because of those less tangible, more speculative, more indirect harms … will do more harm than good.”

And attempts at forcing tech platforms to implement more stringent content moderation policies by making such policies a requirement for Section 230 eligibility may do more harm than good, experts say.

Democratic presidential candidate Beto O’Rourke’s newly unveiled plan to do just that would ultimately result in a ‘block first, ask questions later’ mentality, said Free Press Senior Policy Counsel Carmen Scurato.

“This would likely include the blocking of content from organizations and individuals fighting the spread of racism,” Scurato explained. “Removing this liability exemption could have the opposite effect of O’Rourke’s apparent goals.”

O’Rourke’s unlikely alliance with former rival Sen. Ted Cruz, R-Texas, to take on Section 230 highlights just how convoluted the discussion over the statute has become.

Because the First Amendment’s guarantee of freedom of speech is a restriction on government action, it doesn’t help individuals critical of “censorship” by private online platforms.

It’s up to the platforms themselves — and the public pressure and marketplace choices within which they operate — to decide where to draw lines over hate speech and objectionable content on social media.

Section I: The Communications Decency Act is Born

Section II: How Section 230 Builds on and Supplements the First Amendment

Section III: What Does the Fairness Doctrine Have to Do With the Internet?

Section IV: As Hate Speech Proliferates Online, Critics Want to See and Control Social Media’s Algorithms

Reporter Em McPhie studied communication design and writing at Washington University in St. Louis, where she was a managing editor for the student newspaper. In addition to agency and freelance marketing experience, she has reported extensively on Section 230, big tech, and rural broadband access. She is a founding board member of Code Open Sesame, an organization that teaches computer programming skills to underprivileged children.


Section 230 Interpretation Debate Heats Up Ahead of Landmark Supreme Court Case

Panelists disagreed over the merits of Section 230’s protections and the extent to which they apply.


Screenshot of speakers at the Federalist Society webinar

WASHINGTON, January 25, 2023 — With less than a month to go before the Supreme Court hears a case that could dramatically alter internet platform liability protections, speakers at a Federalist Society webinar on Tuesday were sharply divided over the merits and proper interpretation of Section 230 of the Communications Decency Act.

Gonzalez v. Google, which will go before the Supreme Court on Feb. 21, asks if Section 230 protects Google from liability for hosting terrorist content — and promoting that content via algorithmic recommendations.

If the Supreme Court agrees that “Section 230 does not protect targeted algorithmic recommendations, I don’t see a lot of the current social media platforms and the way they operate surviving,” said Ashkhen Kazaryan, a senior fellow at Stand Together.

Joel Thayer, president of the Digital Progress Institute, argued that the bare text of Section 230(c)(1) does not include any mention of the “immunities” often attributed to the statute, echoing an argument made by several Republican members of Congress.

“All the statute says is that we cannot treat interactive computer service providers or users — in this case, Google’s YouTube — as the publisher or speaker of a third-party post, such as a YouTube video,” Thayer said. “That is all. Warped interpretations from courts… have drastically moved away from the text of the statute to find Section 230(c)(1) as providing broad immunity to civil actions.”

Kazaryan disagreed with this claim, noting that the original co-authors of Section 230 — Sen. Ron Wyden, D-Ore., and former Rep. Chris Cox, R-Calif. — have repeatedly said that Section 230 does provide immunity from civil liability under specific circumstances.

Wyden and Cox reiterated this point in a brief filed Thursday in support of Google, explaining that whether a platform is entitled to immunity under Section 230 relies on two prerequisite conditions. First, the platform must not be “responsible, in whole or in part, for the creation or development of” the content in question, as laid out in Section 230(f)(3). Second, the case must be seeking to treat the platform “as the publisher or speaker” of that content, per Section 230(c)(1).
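
Read as a decision procedure, the co-authors’ two prerequisites amount to a simple conjunction. The sketch below encodes that structure purely for illustration; the function and argument names are hypothetical shorthand, not statutory language or legal advice.

```python
# Illustrative encoding of the two prerequisites described in the Wyden-Cox
# brief. Names are hypothetical shorthand, not statutory language or legal advice.

def qualifies_for_230_immunity(platform_helped_create_content: bool,
                               claim_treats_platform_as_publisher: bool) -> bool:
    # Condition 1, per Section 230(f)(3): the platform must not be responsible,
    # in whole or in part, for creating or developing the content at issue.
    not_a_content_creator = not platform_helped_create_content
    # Condition 2, per Section 230(c)(1): the claim must seek to treat the
    # platform as the publisher or speaker of that third-party content.
    treated_as_publisher = claim_treats_platform_as_publisher
    return not_a_content_creator and treated_as_publisher

# Under the brief's reading of Gonzalez v. Google, both conditions hold:
print(qualifies_for_230_immunity(platform_helped_create_content=False,
                                 claim_treats_platform_as_publisher=True))  # True
```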

The statute’s co-authors argued that Google satisfied these conditions and was therefore entitled to immunity, even if its recommendation algorithms made it easier for users to find and consume terrorist content. “Section 230 protects targeted recommendations to the same extent that it protects other forms of content presentation,” they wrote.

Despite the support of Wyden and Cox, Randolph May, president of the Free State Foundation, predicted that the case was “not going to be a clean victory for Google.” And in addition to the upcoming Supreme Court cases, both Congress and President Joe Biden could potentially attempt to reform or repeal Section 230 in the near future, May added.

May advocated for substantial reforms to Section 230 that would narrow online platforms’ immunity. He also proposed that a new rule should rely on a “reasonable duty of care” that would both preserve the interests of online platforms and also recognize the harms that fall under their control.

To establish a good replacement for Section 230, policymakers must determine whether there is “a difference between exercising editorial control over content on the one hand, and engaging in conduct relating to the distribution of content on the other hand… and if so, how you would treat those differently in terms of establishing liability,” May said.

No matter the Supreme Court’s decision in Gonzalez v. Google, the discussion is already “shifting the Overton window on how we think about social media platforms,” Kazaryan said. “And we already see proposed legislation on state and federal levels that addresses algorithms in many different ways and forms.”

Texas and Florida have already passed laws that would significantly limit social media platforms’ ability to moderate content, although both have been temporarily blocked pending litigation. Tech companies have asked the Supreme Court to take up the cases, arguing that the laws violate their First Amendment rights by forcing them to host certain speech.



Supreme Court Seeks Biden Administration’s Input on Texas and Florida Social Media Laws

The court has not yet agreed to hear the cases, but multiple justices have commented on their importance.


Photo of Solicitor General Elizabeth Prelogar courtesy of the U.S. Department of Justice

WASHINGTON, January 24, 2023 — The Supreme Court on Monday asked for the Joe Biden administration’s input on a pair of state laws that would prevent social media platforms from moderating content based on viewpoint.

The Republican-backed laws in Texas and Florida both stem from allegations that tech companies are censoring conservative speech. The Texas law would restrict platforms with at least 50 million users from removing or demonetizing content based on “viewpoint.” The Florida law places significant restrictions on platforms’ ability to remove any content posted by members of certain groups, including politicians.

Two trade groups — NetChoice and the Computer & Communications Industry Association — jointly challenged both laws, meeting with mixed results in appeals courts. They, alongside many tech companies, argue that the laws would violate platforms’ First Amendment right to decide what speech to host.

Tech companies also warn that the laws would force them to disseminate objectionable and even dangerous content. In an emergency application to block the Texas law from going into effect in May, the trade groups wrote that such content could include “Russia’s propaganda claiming that its invasion of Ukraine is justified, ISIS propaganda claiming that extremism is warranted, neo-Nazi or KKK screeds denying or supporting the Holocaust, and encouraging children to engage in risky or unhealthy behavior like eating disorders.”

The Supreme Court has not yet agreed to hear the cases, but multiple justices have commented on the importance of the issue.

In response to the emergency application in May, Justice Samuel Alito wrote that the case involved “issues of great importance that will plainly merit this Court’s review.” However, he disagreed with the court’s decision to block the law pending review, writing that “whether applicants are likely to succeed under existing law is quite unclear.”

Monday’s request asking Solicitor General Elizabeth Prelogar to weigh in on the cases allows the court to put off the decision for another few months.

“It is crucial that the Supreme Court ultimately resolve this matter: it would be a dangerous precedent to let government insert itself into the decisions private companies make on what material to publish or disseminate online,” CCIA President Matt Schruers said in a statement. “The First Amendment protects both the right to speak and the right not to be compelled to speak, and we should not underestimate the consequences of giving government control over online speech in a democracy.”

The Supreme Court is still scheduled to hear two other major content moderation cases next month, which will decide whether Google and Twitter can be held liable for terrorist content hosted on their respective platforms.



Google Defends Section 230 in Supreme Court Terror Case

‘Section 230 is critical to enabling the digital sector’s efforts to respond to extremist[s],’ said a tech industry supporter.


Photo of ISIS supporter by HatabKhurasani from Wikipedia

WASHINGTON, January 13, 2023 — The Supreme Court could trigger a cascade of internet-altering effects that would encourage both the suppression of speech and the proliferation of more offensive speech, and create a “litigation minefield,” if it decides Google is liable for the results of terrorist attacks carried out by entities that published on its YouTube platform, the search engine company argued Thursday.

The high court will hear the case of Reynaldo Gonzalez, an American whose daughter was killed in an ISIS terrorist attack in Paris in 2015. The family sued Google under the Anti-Terrorism Act over her death, alleging that YouTube acted as a publisher of ISIS recruitment videos by hosting them and sharing them through its recommendation algorithm.

But in a brief to the court on Thursday, Google said it is not liable for content published by third parties on its website under Section 230 of the Communications Decency Act, and that deciding otherwise would effectively gut the statute’s platform protections and “upend the internet.”

Denying the provision’s protections for platforms “could have devastating spillover effects,” Google argued in the brief. “Websites like Google and Etsy depend on algorithms to sift through mountains of user-created content and display content likely relevant to each user. If plaintiffs could evade Section 230(c)(1) by targeting how websites sort content or trying to hold users liable for liking or sharing articles, the internet would devolve into a disorganized mess and a litigation minefield.”

It would also “perversely encourage both wide-ranging suppression of speech and the proliferation of more offensive speech,” it added in the brief. “Sites with the resources to take down objectionable content could become beholden to heckler’s vetoes, removing anything anyone found objectionable.

“Other sites, by contrast, could take the see-no-evil approach, disabling all filtering to avoid any inference of constructive knowledge of third-party content,” Google added. “Still other sites could vanish altogether.”

Google rejected the argument that recommendations by its algorithms convey an “implicit message,” arguing that in such a world, “any organized display [as algorithms do] of content ‘implicitly’ recommends that content and could be actionable.”

The Supreme Court is hearing a similar case, Twitter v. Taamneh, at the same time.

Scrutiny of Section 230 has loomed large since former President Donald Trump was banned from social media platforms for allegedly inciting the Capitol Hill riots in January 2021. Trump and conservatives called for rules limiting that protection in light of the suspensions and bans, while Democrats have not shied away from introducing legislation that would limit the provision if certain content continues to flourish on those platforms.

Supreme Court Justice Clarence Thomas early last year issued a statement calling for a reexamination of tech platform immunity protections following a Texas Supreme Court decision that said Facebook was shielded from liability in a trafficking case.

Meanwhile, startups and internet associations have argued for the preservation of the provision.

“These cases underscore how important it is that digital services have the resources and the legal certainty to deal with dangerous content online,” Matt Schruers, president of the Computer and Communications Industry Association, said in a statement when the Supreme Court decided in October to hear the Gonzalez case.

“Section 230 is critical to enabling the digital sector’s efforts to respond to extremist and violent rhetoric online,” he added, “and these cases illustrate why it is essential that those efforts continue.”

