Part IV: As Hate Speech Proliferates Online, Critics Want to See and Control Social Media’s Algorithms

Photo of Beto O'Rourke in April 2019 by Gage Skidmore used with permission

WASHINGTON, August 22, 2019 — Lurking at the corners of the renewed debate over Section 230 of the Communications Decency Act is this question: Who gets to control the content moderation process surrounding hate speech?

Even as artificial intelligence plays a greater role in content moderation on the big tech platforms, the public is still wrestling with whether content moderation should facilitate free speech or contain harmful speech.

Around the time that Section 230 was passed, most of the discussion surrounding online platforms was based on a “rights framework,” Harvard Law Professor Jonathan Zittrain told Broadband Breakfast. Aside from some limited boundaries against things like active threats, the prevailing attitude was that more speech was always better.

“In the intervening years, in part because of how ubiquitous the internet has become, we’ve seen more of a public health framework,” Zittrain continued. This perspective is concerned less about an individual’s right to speech and more about the harms that such speech could cause.

Misleading information can persuade parents to decide not to vaccinate their children or lead to violence even if the words aren’t a direct incitement, said Zittrain. The public health framework views preventing these harms as an essential part of corporate social responsibility.

Because these contrasting frameworks have such different values and vernaculars, reconciling them into one comprehensive content moderation plan is a nearly impossible task.

What’s the role of artificial intelligence in content moderation?

Another complication in the content moderation debate is that the sheer volume of online content necessitates the use of automated tools — and these tools have some major shortcomings, according to a recent report from New America’s Open Technology Institute.

Algorithmic models are trained on datasets that emphasize particular categories and definitions of speech. These datasets are usually based on English or other Western languages, even though millions of users speak other languages. The resulting algorithms can identify certain types of speech but cannot be applied holistically.

In addition, simply training an algorithm to flag certain words or phrases carries the risk of further suppressing voices that are already marginalized. Sometimes, the “toxicity” of a given term is dependent on the identity of the speaker, since many terms that have historically been used as slurs towards certain groups have been reclaimed by those communities while remaining offensive when used by others.
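
To make that failure mode concrete, here is a deliberately naive sketch of term-based flagging. It is a hypothetical illustration rather than any platform's production system, and the placeholder tokens stand in for real terms:

```python
# A deliberately naive keyword flagger. Hypothetical illustration only;
# the term list and messages are invented placeholders.
FLAGGED_TERMS = {"slur_a", "slur_b"}

def flag_message(text: str) -> bool:
    """Flag a message if it contains any listed term, with no notion of
    who is speaking or how the term is being used."""
    tokens = {token.strip(".,!?").lower() for token in text.split()}
    return not FLAGGED_TERMS.isdisjoint(tokens)

# The same reclaimed term is flagged whether it appears in an in-group
# post or in genuine harassment: exactly the failure mode described above.
print(flag_message("They shouted slur_a at us on the street."))  # True
print(flag_message("Our community has reclaimed slur_a."))       # True
```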

A 2019 academic study found that “existing approaches to toxic language detection have racial biases, and that text alone does not determine offensiveness.” According to the study, tweets written in African American English were twice as likely as other tweets to be labeled offensive.
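
The kind of disparity the study reports can be measured with a short audit of a classifier's false positive rates by group, as in the sketch below. The field names and records are invented for illustration; the study's own methodology was more involved:

```python
# A minimal audit sketch with invented field names and data. Each record
# carries the classifier's verdict, a human judgment, and a dialect tag.
from collections import defaultdict

sample = [
    {"dialect": "aae",   "classifier_offensive": True,  "human_offensive": False},
    {"dialect": "aae",   "classifier_offensive": False, "human_offensive": False},
    {"dialect": "other", "classifier_offensive": False, "human_offensive": False},
    {"dialect": "other", "classifier_offensive": True,  "human_offensive": True},
]

def false_positive_rates(records):
    """Share of messages humans judged benign that the classifier
    nonetheless labeled offensive, grouped by dialect."""
    flagged, benign = defaultdict(int), defaultdict(int)
    for r in records:
        if not r["human_offensive"]:
            benign[r["dialect"]] += 1
            flagged[r["dialect"]] += r["classifier_offensive"]
    return {d: flagged[d] / benign[d] for d in benign}

print(false_positive_rates(sample))  # {'aae': 0.5, 'other': 0.0}
```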

“The academic and tech sector are pushing ahead with saying, ‘let’s create automated tools of hate detection,’ but we need to be more mindful of minority group language that could be considered ‘bad’ by outside members,” said Maarten Sap, one of the researchers behind the study.

AI’s inability to detect nuance, particularly in regard to context and differing global norms, produces tools that are “limited in their ability to detect and moderate content, and this often results in erroneous and overbroad takedowns of user speech, particularly for already marginalized and disproportionately targeted communities,” wrote OTI.

Curatorial context is key: Could activist groups create their own Facebook algorithms?

The problem is that hate speech is inherently dependent on context. And artificial intelligence, as successful as it may be at many things, is incredibly bad at reading nuanced context. For that matter, even human moderators are not always given the full context of the content that they are reviewing.

Moreover, few internet platforms provide meaningful transparency around how they develop and utilize automated tools for content moderation.

The sheer volume of online content has created a new question about neutrality for digital platforms, Zittrain said. Platforms are now not only responsible for what content is banned versus not banned, but also for what is prioritized.

Each digital platform must have some mechanism for choosing which of millions of things to offer at the top of a feed, leading to a complex curatorial process that is fraught with confusion.

This confusion could potentially be alleviated through more transparency from tech companies, Zittrain said. Platforms could even go a step further by allowing third party individuals and organizations to create their own formulas for populating a feed.

Zittrain envisioned Facebook’s default news feed algorithm as a foundation upon which political parties, activist groups, and prominent social figures could construct their own unique algorithms to determine what news should be presented to users and in what order. Users could then select any combination of proxies to curate their feeds, leading to a more diverse digital ecosystem.
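
A minimal sketch of that idea, under broad assumptions: the platform exposes a ranking hook, third parties publish scoring functions, and each user blends them with weights of their choosing. Every name and signal below is hypothetical, not part of any real Facebook interface:

```python
# A sketch of a pluggable feed-ranking hook. All names, signals and the
# hook itself are hypothetical illustrations, not Facebook's actual API.
from typing import Callable

Post = dict  # e.g. {"id": ..., "topic": ..., "age_hours": ...}
Scorer = Callable[[Post], float]

def recency_scorer(post: Post) -> float:
    # A platform-provided default: newer posts score higher.
    return 1.0 / (1.0 + post["age_hours"])

def local_news_scorer(post: Post) -> float:
    # A third-party scorer a civic group might publish.
    return 1.0 if post["topic"] == "local_news" else 0.0

def rank_feed(posts: list[Post], weighted_scorers: dict[Scorer, float]) -> list[Post]:
    """Order posts by a user-chosen weighted blend of scorers."""
    def blended(post: Post) -> float:
        return sum(weight * scorer(post) for scorer, weight in weighted_scorers.items())
    return sorted(posts, key=blended, reverse=True)

posts = [
    {"id": 1, "topic": "local_news", "age_hours": 6},
    {"id": 2, "topic": "celebrity", "age_hours": 1},
]
# One user leans on the civic group's scorer; another could pick a
# different mix without the platform changing anything centrally.
feed = rank_feed(posts, {local_news_scorer: 0.8, recency_scorer: 0.2})
print([p["id"] for p in feed])  # [1, 2]
```

Under a scheme like this, switching curators is a matter of changing weights, not waiting for the platform to rewrite its own algorithm.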

Critics of YouTube say the platform’s autoplay pushes extreme content

But without such a system in place, users are dependent on platforms’ existing algorithms and content moderation policies — and these policies are much criticized.

YouTube’s autoplay function is a particularly egregious offender. A Wall Street Journal report found that it guided users towards increasingly extreme and radical content. For example, if users searched for information on a certain vaccine, autoplay would direct them to anti-vaccination videos.
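
The dynamic critics allege can be seen in a toy model: if the signal a recommender optimizes correlates with how sensational a video is, a greedy autoplay rule surfaces the most extreme candidate even after a neutral search. The titles and scores below are invented, and this is not a description of YouTube's actual recommender:

```python
# A toy model of an engagement-driven autoplay pick. Hypothetical titles
# and scores; an illustration of the alleged dynamic, nothing more.
candidates = [
    {"title": "Vaccine overview from a clinic", "predicted_watch_minutes": 3.0},
    {"title": "Doctors ARGUE about vaccines", "predicted_watch_minutes": 6.5},
    {"title": "What THEY don't want you to know about vaccines", "predicted_watch_minutes": 9.0},
]

def next_autoplay(videos):
    """Greedy rule: queue whichever candidate maximizes predicted
    engagement, with no penalty for extremity or misinformation."""
    return max(videos, key=lambda v: v["predicted_watch_minutes"])

# Even after a neutral search, the most sensational candidate wins.
print(next_autoplay(candidates)["title"])
```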

The popular platform’s approach to content moderation “sounded great when it was all about free speech and ‘in the marketplace of ideas, only the best ones win,’” Northeastern University professor Christo Wilson told the Journal. “But we’re seeing again and again that that’s not what happens. What’s happening instead is the systems are being gamed and the people are being gamed.”

Automated tools work best in combating content that is universally objectionable

Automated tools have been found to be the most successful in cases where there is wide consensus as to what constitutes objectionable content, such as the parameters surrounding child sexual abuse material.
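
That consensus is what makes automated matching tractable. Platforms compare uploads against shared databases of fingerprints of known material; the sketch below substitutes an exact SHA-256 match for the perceptual hashing real systems use, purely to keep the idea visible:

```python
# A simplified sketch of matching uploads against a shared database of
# known material. Real systems use perceptual hashes that survive
# re-encoding (PhotoDNA-style); exact SHA-256 here is a stand-in for
# the idea, and the single database entry is just the hash of b"test".
import hashlib

KNOWN_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def matches_known_content(file_bytes: bytes) -> bool:
    """Matching works here because the category is defined by an
    agreed-upon list of fingerprints, not a judgment call about meaning."""
    return hashlib.sha256(file_bytes).hexdigest() in KNOWN_HASHES

print(matches_known_content(b"test"))        # True: listed fingerprint
print(matches_known_content(b"other file"))  # False: no match
```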

However, many categories of so-called hate speech are far more subjective. Hateful speech can cause damage other than a direct incitement to violence, such as emotional disturbance or psychic trauma with physiological manifestations, former American Civil Liberties Union President Nadine Strossen told NBC in a 2018 interview.

These are real harms and should be acknowledged, Strossen continued, but “loosening up the constraints on government to allow it to punish speech because of those less tangible, more speculative, more indirect harms … will do more harm than good.”

And attempts at forcing tech platforms to implement more stringent content moderation policies by making such policies a requirement for Section 230 eligibility may do more harm than good, experts say.

Democratic presidential candidate Beto O’Rourke’s newly unveiled plan to do just that would ultimately result in a ‘block first, ask questions later’ mentality, said Free Press Senior Policy Counsel Carmen Scurato.

“This would likely include the blocking of content from organizations and individuals fighting the spread of racism,” Scurato explained. “Removing this liability exemption could have the opposite effect of O’Rourke’s apparent goals.”

O’Rourke’s unlikely alliance with former rival Sen. Ted Cruz, R-Texas, to take on Section 230 highlights just how convoluted the discussion over the statute has become.

Because the First Amendment’s guarantee of freedom of speech is a restriction on government action, it doesn’t help individuals critical of “censorship” by private online platforms.

It’s up to the platforms themselves — and the public pressure and marketplace choices within which they operate — to decide where to draw lines over hate speech and objectionable content on social media.

Section I: The Communications Decency Act is Born

Section II: How Section 230 Builds on and Supplements the First Amendment

Section III: What Does the Fairness Doctrine Have to Do With the Internet?

Section IV: As Hate Speech Proliferates Online, Critics Want to See and Control Social Media’s Algorithms

Experts Reflect on Supreme Court Decision to Block Texas Social Media Bill

Observers on a Broadband Breakfast panel offered differing perspectives on the high court’s decision.

Parler CPO Amy Peikoff

WASHINGTON, June 2, 2022 – Experts hosted by Broadband Breakfast Wednesday were split on what to make of the Supreme Court’s 5-4 decision to vacate a lower court order that had lifted a ban on a Texas social media law, a law that would have made it illegal for certain large platforms to crack down on speech they deem reprehensible.

The decision keeps the law from taking effect until a full determination is made by a lower court.

During a Broadband Breakfast Live Online event on Wednesday, Ari Cohn, free speech counsel for tech lobbyist TechFreedom, argued that the bill “undermines the First Amendment to protect the values of free speech.

“We have seen time and again over the course of history that when you give the government power to start encroaching on editorial decisions [it will] never go away, it will only grow stronger,” he cautioned. “It will inevitably be abused by whoever is in power.”

Nora Benavidez, senior counsel and director of digital justice and civil rights for advocacy group Free Press, agreed with Cohn. “This is a state effort to control what private entities do,” she said Wednesday. “That is unconstitutional.

“When government attempts to invade into private action that is deeply problematic,” Benavidez continued. “We can see hundreds and hundreds of years of examples of where various countries have inserted themselves into private actions – that leads to authoritarianism, that leads to censorship.”

Different perspectives

Scott McCollough, principal at the McCollough Law Firm, said Wednesday that he believed the law should have been allowed to stand.

“I agree the government should not be picking and choosing who gets to speak and who does not,” he said. “The intent behind the Texas statute was to prevent anyone from being censored – regardless of viewpoint, no matter what [the viewpoint] is.”

McCollough argued that this case was about whose free speech values supersede the other’s – “those of the platforms, or those of the people who feel that they are being shut out from what is today the public square.

“In the end it will be a court that acts, and the court is also the state,” McCollough added. “So, in that respect, the state would still be weighing in on who wins and who loses – who gets to speak and who does not.”

Amy Peikoff, chief policy officer of social media platform Parler, said Wednesday that her primary concern was “viewpoint discrimination in favor of the ruling elite.”

Peikoff was particularly concerned about coordination between state agencies and social media platforms to “squelch certain viewpoints.”

Peikoff clarified, however, that she did not believe the Texas law was the best vehicle to address these concerns, suggesting instead that lawsuits – preferably private ones – be used to remove the “censorious cancer,” rather than entangling a government entity in the matter.

“This cancer grows out of a partnership between government and social media to squelch discussion about certain viewpoints and perspectives.”

Wednesday, June 1, 2022, 12 Noon ET – BREAKING NEWS EVENT! – The Supreme Court, Social Media and the Culture Wars

The Supreme Court on Tuesday blocked a Texas law that would ban large social media companies from removing posts based on the views they express. Join us for this breaking news event of Broadband Breakfast Live Online in which we discuss the Supreme Court, social media and the culture wars.

Panelists:

  • Scott McCollough, Attorney, McCollough Law Firm
  • Amy Peikoff, Chief Policy Officer, Parler
  • Ari Cohn, Free Speech Counsel, TechFreedom
  • Nora Benavidez, Senior Counsel and Director of Digital Justice and Civil Rights at Free Press
  • Drew Clark (presenter and host), Editor and Publisher, Broadband Breakfast

Panelist resources:

W. Scott McCollough has practiced communications and Internet law for 38 years, with a specialization in regulatory issues confronting the industry. Clients include competitive communications companies, Internet service and application providers, public interest organizations and consumers.

Amy Peikoff is the Chief Policy Officer of Parler. After completing her Ph.D., she taught at universities (University of Texas, Austin, University of North Carolina, Chapel Hill, United States Air Force Academy) and law schools (Chapman, Southwestern), publishing frequently cited academic articles on privacy law, as well as op-eds in leading newspapers across the country on a range of issues. Just prior to joining Parler, she founded and was President of the Center for the Legalization of Privacy, which submitted an amicus brief in United States v. Facebook in 2019.

Ari Cohn is Free Speech Counsel at TechFreedom. A nationally recognized expert in First Amendment law, he was previously the Director of the Individual Rights Defense Program at the Foundation for Individual Rights in Education (FIRE), and has worked in private practice at Mayer Brown LLP and as a solo practitioner, and was an attorney with the U.S. Department of Education’s Office for Civil Rights. Ari graduated cum laude from Cornell Law School, and earned his Bachelor of Arts degree from the University of Illinois at Urbana-Champaign.

Nora Benavidez manages Free Press’s efforts around platform and media accountability to defend against digital threats to democracy. She previously served as the director of PEN America’s U.S. Free Expression Programs, where she guided the organization’s national advocacy agenda on First Amendment and free-expression issues, including press freedom, disinformation defense and protest rights. Nora launched and led PEN America’s media-literacy and disinformation-defense program. She also led the organization’s groundbreaking First Amendment lawsuit, PEN America v. Donald Trump, to hold the former president accountable for his retaliation against and censorship of journalists he disliked.

Drew Clark is the Editor and Publisher of BroadbandBreakfast.com and a nationally-respected telecommunications attorney. Drew brings experts and practitioners together to advance the benefits provided by broadband. Under the American Recovery and Reinvestment Act of 2009, he served as head of a State Broadband Initiative, the Partnership for a Connected Illinois. He is also the President of the Rural Telecommunications Congress.

Photo of the Supreme Court from September 2020 by Aiva.

Narrow Majority of Supreme Court Blocks Texas Law Regulating Social Media Platforms

The decision resulted in an unusual court split. Justice Kagan sided with Justice Alito but refused to sign his dissent.

Caricature of Samuel Alito by Donkey Hotey used with permission

WASHINGTON, May 31, 2022 – On a narrow 5-4 vote, the Supreme Court of the United States on Tuesday blocked a Texas law that Republicans had argued would address the “censorship” of conservative voices on social media platforms.

Texas H.B. 20 was written by Texas Republicans to combat perceived bias against conservative viewpoints voiced on Facebook, Twitter, and other social media platforms with at least 50 million active monthly users.

The bill was drafted at least in part as a reaction to President Donald Trump’s ban from social media. Immediately following the January 6 riots at the United States Capitol, Trump was simultaneously banned from several platforms and online services, including Amazon, Facebook, Twitter, Reddit, and myriad other websites.

See also Explainer: With Florida Social Media Law, Section 230 Now Positioned In Legal Spotlight, Broadband Breakfast, May 25, 2021

Close decision on First Amendment principles

A brief six-page dissent on the matter was released on Tuesday. Conservative Justices Samuel Alito, Neil Gorsuch, and Clarence Thomas dissented, arguing that the law should have been allowed to stand. Justice Elena Kagan also agreed that the law should be allowed to stand, though she did not join Alito’s penned dissent and did not elaborate further.

The decision was on an emergency action to vacate a one-sentence decision of the Fifth Circuit Court of Appeals. The appeals court had stayed a prior injunction issued by a federal district court. In other words, the law passed by the Texas legislature and signed by Gov. Greg Abbott is precluded from going into effect.

Tech lobbying group NetChoice – in addition to many entities in Silicon Valley – argued that the law would prevent social media platforms from moderating and addressing hateful and potentially inflammatory content.

In a statement, Computer & Communications Industry Association President Matt Schruers said, “We are encouraged that this attack on First Amendment rights has been halted until a court can fully evaluate the repercussions of Texas’s ill-conceived statute.”

“This ruling means that private American companies will have an opportunity to be heard in court before they are forced to disseminate vile, abusive or extremist content under this Texas law. We appreciate the Supreme Court ensuring First Amendment protections, including the right not to be compelled to speak, will be upheld during the legal challenge to Texas’s social media law.”

In a statement, Public Knowledge Legal Director John Bergmayer said, “It is good that the Supreme Court blocked HB 20, the Texas online speech regulation law. But it should have been unanimous. It is alarming that so many policymakers, and even Supreme Court justices, are willing to throw out basic principles of free speech to try to control the power of Big Tech for their own purposes, instead of trying to limit that power through antitrust and other competition policies. Reining in the power of tech giants does not require abandoning the First Amendment.”

In his dissent, Alito pointed out that the plaintiffs argued “HB 20 interferes with their exercise of ‘editorial discretion,’ and they maintain that this interference violates their right ‘not to disseminate speech generated by others.’”

“Under some circumstances, we have recognized the right of organizations to refuse to host the speech of others,” he said, referencing Hurley v. Irish-American Gay, Lesbian and Bisexual Group of Boston, Inc.

“But we have rejected such claims in other circumstances,” he continued, pointing to PruneYard Shopping Center v. Robins.

Will Section 230 be revamped on a full hearing by the Supreme Court?

“It is not at all obvious how our existing precedents, which predate the age of the internet, should apply to large social media companies, but Texas argues that its law is permissible under our case law,” Alito said.

Alito argued that there is a distinction between compelling a platform to host a message and refraining from discriminating against a user’s speech “on the basis of viewpoint.” He said that H.B. 20 adopted the latter approach.

Alito went on, arguing that the bill only applied to “platforms that hold themselves out as ‘open to the public,’” and “neutral forums for the speech of others,” and thus, the targeted platforms are not spreading messages they endorse.

Alito added that because the bill only targets platforms with more than 50 million users, it only targets entities with “some measure of common carrier-like market power and that this power gives them an ‘opportunity to shut out [disfavored] speakers.’”

Chief Justice John Roberts and Justices Stephen Breyer, Sonia Sotomayor, Brett Kavanaugh and Amy Coney Barrett all voted affirmatively – siding with NetChoice LLC’s emergency application – to block H.B. 20 from being enforced.

Parler Policy Exec Hopes ‘Sustainable’ Free Speech Change on Twitter if Musk Buys Platform

Parler’s Amy Peikoff said she hopes Twitter can follow in her social media company’s footsteps.

Screenshot of Amy Peikoff

WASHINGTON, May 16, 2022 – A representative from a growing conservative social media platform said last week that she hopes Twitter, under new leadership, will emerge as a “sustainable” platform for free speech.

Amy Peikoff, chief policy officer of social media platform Parler, said as much during a Broadband Breakfast Live Online event Wednesday, in which she wondered about the implications of platforms banning accounts for views deemed controversial.

The social media world has been captivated by the lingering possibility that SpaceX and Tesla CEO Elon Musk could buy Twitter, which the billionaire has criticized for making decisions he said infringe on free speech.

Before Musk’s move to buy the company, Parler saw a surge in member sign-ups after former President Donald Trump was banned from Twitter for comments the platform saw as encouraging the Capitol riots of January 6, 2021, a move Peikoff criticized. (Trump also criticized the ban.)

Peikoff said she believes Twitter should be a free speech platform just like Parler and hopes for “sustainable” change with Musk’s promise.

“At Parler, we expect you to think for yourself and curate your own feed,” Peikoff told Broadband Breakfast Editor and Publisher Drew Clark. “The difference between Twitter and Parler is that on Parler the content is controlled by individuals; Twitter takes it upon itself to moderate by itself.”

She recommended “tools in the hands of the individual users to reward productive discourse and exercise freedom of association.”

Peikoff criticized Twitter for permanently banning Donald Trump following the insurrection at the U.S. Capitol on January 6, and recounted the struggle Parler had in obtaining access to hosting services on AWS, Amazon’s web services platform.

While she defended the role of Section 230 of the Telecom Act for Parler and others, Peikoff criticized what she described as Twitter’s collusion with the government. Section 230 provides immunity from civil suits for comments posted by others on a social media network.

For example, Peikoff cited a July 2021 statement by former White House Press Secretary Jen Psaki raising concerns with “misinformation” on social media. When Twitter takes action to stifle anti-vaccination speech at the behest of the White House, Peikoff argued, it crosses the line into a form of censorship by social media giants that is, in effect, a form of “state action.”

Conservatives censored by Twitter or other social media networks that are undertaking such “state action” are wrongfully being deprived of their First Amendment rights, she said.

“I would not like to see more of this entanglement of government and platforms going forward,” Peikoff said, adding that she would rather “leave human beings free to information and speech.”

Screenshot of Drew Clark and Amy Peikoff during Wednesday’s Broadband Breakfast Live Online event

Wednesday, May 11, 2022, 12 Noon ET – Mr. Musk Goes to Washington: Will Twitter’s New Owner Change the Debate About Social Media?

The acquisition of social media powerhouse Twitter by Elon Musk, the world’s richest man, raises a host of issues about social media, free speech, and the power of persuasion in our digital age. Twitter already serves as the world’s de facto public square. But it hasn’t been without controversy, including the platform’s decision to ban former President Donald Trump in the wake of his tweets during the January 6 attack on the U.S. Capitol. Under new management, will Twitter become more hospitable to Trump and his allies? Does Twitter have a free speech problem? How will Mr. Musk’s acquisition change the debate about social media and Section 230 of the Telecommunications Act?

Guests for this Broadband Breakfast for Lunch session:

  • Amy Peikoff, Chief Policy Officer, Parler
  • Drew Clark (host), Editor and Publisher, Broadband Breakfast

Illustration by Mohamed Hassan used with permission
