Social Media

Protesting Twitter’s ‘Normalizing Racism,’ Activists Call on Social Network to Ban White Supremacists


WASHINGTON, August 7, 2019 — As the second anniversary of the Unite the Right rally approaches, activists are calling for Twitter to ban key advocates of white supremacy from its platform.

David Duke, Richard Spencer, and other key organizers of the alt-right rally—which left counter-protester Heather Heyer dead after a white supremacist deliberately rammed his car into a crowd—still have access to their Twitter accounts. That allows them to spread their ideologies to tens of thousands of followers.

Change the Terms, a coalition of more than 50 human, civil, and digital rights groups, on Wednesday petitioned Twitter in a conference call and press release to ban these and other controversial speakers from its platform—and to expand its content moderation policies.

“The deadly Unite the Right rally was planned on social media, and our community is still feeling the profound impact of that violence today,” said Don Gathers, co-founder of the Charlottesville chapter of Black Lives Matter. “It’s time these companies use their terms of service to keep white supremacists off Twitter and reduce the hate that leads to tragedy.”

The coalition has put together a set of recommended policies for corporations to adopt to address dangerous hate speech on their platforms, which Change the Terms defines as “activities that incite or engage in violence, intimidation, harassment, threats, or defamation targeting an individual or group based on their actual or perceived race, color, religion, national origin, ethnicity, immigration status, gender, gender identity, sexual orientation, or disability.”

The definition was written in an attempt to mirror language from existing hate crime laws, in which courts have said that particular types of speech are not protected under the First Amendment.

While Facebook and YouTube have taken steps to remove white supremacist content from their platforms, Twitter has yet to do so.

In response to criticism, Twitter has repeatedly pointed to its existing content policy, which prohibits threats or glorification of violence, targeted harassment, and hateful conduct. But critics argue that these policies are not being enforced and that they should be more comprehensive.

“When Twitter gives well-known white supremacists a platform, even after they have been deemed too extreme by Facebook and YouTube, their company becomes complicit in normalizing racism and the hateful acts inspired by it,” said Jessica González, vice president of strategy at Free Press.

“How white supremacy has become normalized directly connects to Twitter,” said Lisa Woolfork, a professor at the University of Virginia. “Extreme discourse has become not-so-extreme anymore. We are anesthetized to its toxicity.”

Fringe platforms such as 8chan may be magnets for anti-immigrant and anti-Semitic ideologies, but Twitter is where these ideas become mainstream, said MediaJustice Co-Director Steven Renderos. He added that American culture is increasingly being shaped by social media.

White supremacy is not a new ideology, Woolfork said. But the ease with which its proponents can spread their ideas to a global audience is unprecedented. Twitter’s current policies amplify the harms of white supremacy, she continued, and as a leading global communications platform, it has a responsibility to consider this harm and take action to stop it.

Among the other white nationalists enjoying access to wide-reaching audiences on the platform is conspiracy theorist Renaud Camus, whose anti-immigrant writings were cited as an inspiration by the gunmen behind the attacks in El Paso and Christchurch, New Zealand. Camus still uses Twitter to defend his thinking.

“From Charlottesville two years ago to El Paso this week, we’ve seen the tragic outcomes of white nationalism spreading on Twitter, made even more dangerous every time Trump is allowed to tweet his bigoted rhetoric,” said Brandi Collins-Dexter, senior campaign director for Color Of Change.

While Change the Terms is not explicitly calling for Trump’s account to be banned, Renderos emphasized the importance of platforms proactively enforcing their content policies across all accounts, even those belonging to prominent politicians.

White nationalists are taking advantage of online platforms like Twitter to harass marginalized communities, build power and organizational strength, and amplify violent ideologies, said Collins-Dexter, calling for Twitter’s leadership to “get over their fear of conservative backlash and fully stamp out discrimination on the platform.”

(Photo of Brandi Collins-Dexter of Color of Change by New America, used with permission.)

Free Speech

Panel Hears Opposing Views on Content Moderation Debate

Some panelists agreed that there is egregious information that should be downranked on search platforms.


Screenshot of Renee DiResta, research manager at Stanford Internet Observatory.

WASHINGTON, September 14, 2022 – Panelists wrangled over how technology platforms should handle content moderation at an event hosted by the Lincoln Network Friday, with one arguing that search engines should neutralize misinformation that causes direct, “tangible” harms and another advocating a content moderation standard that does not discriminate based on viewpoint.

Debate about what to do with certain content on technology platforms has picked up steam since former President Donald Trump was removed last year from platforms including Facebook and Twitter for allegedly inciting the January 6, 2021, storming of the Capitol.

Search engines generally moderate content algorithmically, prioritizing certain results over others. Most engines, like Google, prioritize results from institutions generally considered to be credible, such as universities and government agencies.

That can be a good thing, said Renee DiResta, research manager at Stanford Internet Observatory. If search engines allow scams or medical misinformation to headline search results, she argued, “tangible” material or physical harms will result.

The internet moved communications from the “one-to-many” model of broadcast media – e.g., television and radio – to a “many-to-many” model, said DiResta. She argued that “many-to-many” interactions create social frictions and make possible the formation of social media mobs.

At the beginning of the year, Georgia Republican Representative Marjorie Taylor Greene was permanently removed from Twitter for allegedly spreading Covid-19 misinformation, the same reason Kentucky Senator Rand Paul was removed from Alphabet Inc.’s YouTube.

Lincoln Network senior fellow Antonio Martinez endorsed a more permissive content moderation strategy that – excluding content that incites imminent, lawless action – is tolerant of heterodox speech. “To think that we can epistemologically or even technically go in and establish capital-T Truth at scale is impossible,” he said.

Trump has said he is committed to a platform of open speech with the creation of his social media website, Truth Social. Other platforms, such as social media site Parler and video-sharing website Rumble, have purported to allow more speech than the incumbents. SpaceX CEO Elon Musk previously committed to buying Twitter because of its policies prohibiting certain speech, though he now wants out of that commitment.

Alex Feerst, CEO of digital content curator Murmuration Labs, said that free-speech aphorisms – such as, “The cure for bad speech is more speech” – may no longer hold true given the volume of speech enabled by the internet.


Social Media

Americans Should Look to Filtration Software to Block Harmful Content from View, Event Hears

One professor said it is the only way to solve the harmful content problem without encroaching on free speech rights.


Photo of Adam Neufeld of Anti-Defamation League, Steve Delbianco of NetChoice, Barak Richman of Duke University, Shannon McGregor of University of North Carolina (left to right)

WASHINGTON, July 21, 2022 – Researchers at an Internet Governance Forum event Thursday recommended the use of third-party software that filters out harmful content on the internet, in an effort to combat what they say are social media algorithms that feed users content they don’t want to see.

Users of social media sites often don’t know what algorithms are filtering the information they consume, said Steve DelBianco, CEO of NetChoice, a trade association that represents the technology industry. Most algorithms function to maximize user engagement by manipulating users’ emotions, which is particularly worrisome, he said.

But third-party software, such as Sightengine and Amazon’s Rekognition – which moderates what users see by screening out images and videos the user flags as objectionable – could serve as an alternative way to tackle disinformation and hate speech, said Barak Richman, professor of law and business at Duke University.

Richman argued that this “middleware technology” is the only way to solve this universal problem without encroaching on free speech rights. He suggested Americans adopt these technologies – which would be supported by popular platforms including Facebook, Google, and TikTok – to create a buffer between harmful algorithms and the user.

Such technologies already exist in limited applications that offer less personalization and accuracy in filtering, said Richman. But the market demand needs to increase to support innovation and expansion in this area.
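To make the idea concrete, here is a minimal, hypothetical sketch of how such middleware might sit between a platform feed and the user, assuming Amazon Rekognition’s image-moderation API (called via the boto3 library) as the third-party filter; the blocked categories, confidence threshold, and file name are illustrative assumptions, not details from the panel.

```python
# Hypothetical "middleware" filter: hide a feed image if it matches categories
# the user has chosen to block. Assumes AWS credentials are configured and the
# boto3 package is installed; categories and threshold are illustrative only.
import boto3

# Categories the user opted to filter out (Rekognition top-level label names).
BLOCKED_CATEGORIES = {"Hate Symbols", "Violence", "Explicit Nudity"}

rekognition = boto3.client("rekognition")

def should_hide(image_bytes: bytes, min_confidence: float = 75.0) -> bool:
    """Return True if the image matches any category the user chose to block."""
    response = rekognition.detect_moderation_labels(
        Image={"Bytes": image_bytes},
        MinConfidence=min_confidence,
    )
    # Each returned label includes a Name, a ParentName, and a Confidence score.
    return any(
        label["Name"] in BLOCKED_CATEGORIES
        or label.get("ParentName") in BLOCKED_CATEGORIES
        for label in response["ModerationLabels"]
    )

if __name__ == "__main__":
    # Illustrative usage: decide whether to display a single downloaded image.
    with open("post_image.jpg", "rb") as f:
        print("hide" if should_hide(f.read()) else "show")
```

In the framing Richman described, the key design choice is that the list of blocked categories lives with the user (or a middleware vendor the user picks) rather than with the platform’s engagement-driven ranking system.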

Americans across party lines believe that there is a problem with disinformation and hate speech, but disagree on the solution, added fellow panelist Shannon McGregor, senior researcher at the Center for Information, Technology, and Public Life at the University of North Carolina.

The conversation comes as debate continues regarding Section 230, a provision in the Communications Decency Act that protects technology platforms from being liable for content their users post. Some say Section 230 only protects “neutral platforms,” while others claim it allows powerful companies to ignore user harm. Experts in the space disagree on the responsibility of tech companies to moderate content on their platforms.


Free Speech

Experts Reflect on Supreme Court Decision to Block Texas Social Media Bill

Observers on a Broadband Breakfast panel offered differing perspectives on the high court’s decision.


Parler CPO Amy Peikoff

WASHINGTON, June 2, 2022 – Experts hosted by Broadband Breakfast Wednesday were split on what to make of the Supreme Court’s 5-4 decision to reverse a lower court order lifting a ban on a Texas social media law that would have made it illegal for certain large platforms to crack down on speech they deem reprehensible.

The decision keeps the law from taking effect until a full determination is made by a lower court.

During the Broadband Breakfast Live Online event on Wednesday, Ari Cohn, free speech counsel for tech lobbyist TechFreedom, argued that the bill “undermines the First Amendment to protect the values of free speech.

“We have seen time and again over the course of history that when you give the government power to start encroaching on editorial decisions [it will] never go away, it will only grow stronger,” he cautioned. “It will inevitably be abused by whoever is in power.”

Nora Benavidez, senior counsel and director of digital justice and civil rights for advocacy group Free Press, agreed with Cohn. “This is a state effort to control what private entities do,” she said Wednesday. “That is unconstitutional.

“When government attempts to invade into private action, that is deeply problematic,” Benavidez continued. “We can see hundreds and hundreds of years of examples of where various countries have inserted themselves into private actions – that leads to authoritarianism, that leads to censorship.”

Different perspectives

Scott McCollough, principal at the McCollough Law Firm, said Wednesday that he believed the law should have been allowed to stand.

“I agree the government should not be picking and choosing who gets to speak and who does not,” he said. “The intent behind the Texas statute was to prevent anyone from being censored – regardless of viewpoint, no matter what [the viewpoint] is.”

McCollough argued that this case was about which free speech values supersede the other – “those of the platforms, or those of the people who feel that they are being shut out from what is today the public square.

“In the end it will be a court that acts, and the court is also the state,” McCollough added. “So, in that respect, the state would still be weighing in on who wins and who loses – who gets to speak and who does not.”

Amy Peikoff, chief policy officer of social media platform Parler, said Wednesday that her primary concern was “viewpoint discrimination in favor of the ruling elite.”

Peikoff was particularly concerned about coordination between state agencies and social media platforms to “squelch certain viewpoints.”

Peikoff clarified, however, that she did not believe the Texas law was the best vehicle to address these concerns, suggesting instead that lawsuits – preferably private ones – be used to remove the “censorious cancer,” rather than entangling a government entity in the matter.

“This cancer grows out of a partnership between government and social media to squelch discussion about certain viewpoints and perspectives.”

Our Broadband Breakfast Live Online events take place on Wednesday at 12 Noon ET. Watch the event on Broadband Breakfast, or REGISTER HERE to join the conversation.

Wednesday, June 1, 2022, 12 Noon ET – BREAKING NEWS EVENT! – The Supreme Court, Social Media and the Culture Wars

The Supreme Court on Tuesday blocked a Texas law that would ban large social media companies from removing posts based on the views they express. Join us for this breaking news event of Broadband Breakfast Live Online in which we discuss the Supreme Court, social media and the culture wars.

Panelists:

  • Scott McCollough, Attorney, McCollough Law Firm
  • Amy Peikoff, Chief Policy Officer, Parler
  • Ari Cohn, Free Speech Counsel, TechFreedom
  • Nora Benavidez, Senior Counsel and Director of Digital Justice and Civil Rights at Free Press
  • Drew Clark (presenter and host), Editor and Publisher, Broadband Breakfast

Panelist resources:

W. Scott McCollough has practiced communications and Internet law for 38 years, with a specialization in regulatory issues confronting the industry.  Clients include competitive communications companies, Internet service and application providers, public interest organizations and consumers.

Amy Peikoff is the Chief Policy Officer of Parler. After completing her Ph.D., she taught at universities (University of Texas, Austin, University of North Carolina, Chapel Hill, United States Air Force Academy) and law schools (Chapman, Southwestern), publishing frequently cited academic articles on privacy law, as well as op-eds in leading newspapers across the country on a range of issues. Just prior to joining Parler, she founded and was President of the Center for the Legalization of Privacy, which submitted an amicus brief in United States v. Facebook in 2019.

Ari Cohn is Free Speech Counsel at TechFreedom. A nationally recognized expert in First Amendment law, he was previously the Director of the Individual Rights Defense Program at the Foundation for Individual Rights in Education (FIRE), and has worked in private practice at Mayer Brown LLP and as a solo practitioner, and was an attorney with the U.S. Department of Education’s Office for Civil Rights. Ari graduated cum laude from Cornell Law School, and earned his Bachelor of Arts degree from the University of Illinois at Urbana-Champaign.

Nora Benavidez manages Free Press’s efforts around platform and media accountability to defend against digital threats to democracy. She previously served as the director of PEN America’s U.S. Free Expression Programs, where she guided the organization’s national advocacy agenda on First Amendment and free-expression issues, including press freedom, disinformation defense and protest rights. Nora launched and led PEN America’s media-literacy and disinformation-defense program. She also led the organization’s groundbreaking First Amendment lawsuit, PEN America v. Donald Trump, to hold the former president accountable for his retaliation against and censorship of journalists he disliked.

Drew Clark is the Editor and Publisher of BroadbandBreakfast.com and a nationally-respected telecommunications attorney. Drew brings experts and practitioners together to advance the benefits provided by broadband. Under the American Recovery and Reinvestment Act of 2009, he served as head of a State Broadband Initiative, the Partnership for a Connected Illinois. He is also the President of the Rural Telecommunications Congress.

Photo of the Supreme Court from September 2020 by Aiva.

WATCH HERE, or on YouTube, Twitter and Facebook.

As with all Broadband Breakfast Live Online events, the FREE webcasts will take place at 12 Noon ET on Wednesday.

SUBSCRIBE to the Broadband Breakfast YouTube channel. That way, you will be notified when events go live. Watch on YouTube, Twitter and Facebook.

See a complete list of upcoming and past Broadband Breakfast Live Online events.

