Social Media

Protesting Twitter’s ‘Normalizing Racism,’ Activists Call on Social Network to Ban White Supremacists

WASHINGTON, August 7, 2019 — As the second anniversary of the Unite the Right rally approaches, activists are calling for Twitter to ban key advocates of white supremacy from its platform.

David Duke, Richard Spencer, and other key organizers of the alt-right rally—which left counter-protester Heather Heyer dead after a white supremacist deliberately rammed his car into a crowd—still have access to their Twitter accounts. That allows them to spread their ideologies to tens of thousands of followers.

Change the Terms, a coalition of more than 50 human, civil, and digital rights groups, on Wednesday petitioned Twitter in a conference call and press release to ban these and other controversial speakers from its platform and to expand its content moderation policies.

“The deadly Unite the Right rally was planned on social media, and our community is still feeling the profound impact of that violence today,” said Don Gathers, co-founder of the Charlottesville chapter of Black Lives Matter. “It’s time these companies use their terms of service to keep white supremacists off Twitter and reduce the hate that leads to tragedy.”

The coalition has put together a set of recommended policies for corporations to adopt to address dangerous hate speech on their platforms, which Change the Terms defines as “activities that incite or engage in violence, intimidation, harassment, threats, or defamation targeting an individual or group based on their actual or perceived race, color, religion, national origin, ethnicity, immigration status, gender, gender identity, sexual orientation, or disability.”

The definition was written in an attempt to mirror language from existing hate crime laws, in which courts have said that particular types of speech are not protected under the First Amendment.

While Facebook and YouTube have taken steps to remove white supremacy from their platforms, Twitter has yet to do so.

In response to criticism, Twitter has repeatedly referred to its existing content policy, which prohibits users from threatening or glorifying violence, engaging in targeted harassment, and participating in hateful conduct. Critics argue, however, that these policies are not being enforced and that Twitter’s approach needs to be more comprehensive.

“When Twitter gives well-known white supremacists a platform, even after they have been deemed too extreme by Facebook and YouTube, their company becomes complicit in normalizing racism and the hateful acts inspired by it,” said Jessica González, vice president of strategy at Free Press.

“How white supremacy has become normalized directly connects to Twitter,” said Lisa Woolfork, a professor at the University of Virginia. “Extreme discourse has become not-so-extreme anymore. We are anesthetized to its toxicity.”

Fringe platforms such as 8chan may be magnets for anti-immigrant and anti-Semitic ideologies, but Twitter is where these ideas become mainstream, said MediaJustice Co-Director Steven Renderos. He added that American culture is increasingly being shaped by social media.

White supremacy is not a new ideology, Woolfork said. But the ease with which its proponents can spread their ideas to a global audience is unprecedented. Twitter’s current policies amplify the harms of white supremacy, she continued, and as a leading global communications platform, it has a responsibility to consider this harm and take action to stop it.

Among other white nationalists enjoying access to wide-reaching audiences on the platform is conspiracy theorist Renaud Camus, whose anti-immigrant writings were cited as inspiration by the gunmen who carried out attacks in El Paso and Christchurch, New Zealand. Camus still uses Twitter to defend his thinking.

“From Charlottesville two years ago to El Paso this week, we’ve seen the tragic outcomes of white nationalism spreading on Twitter, made even more dangerous every time Trump is allowed to tweet his bigoted rhetoric,” said Brandi Collins-Dexter, senior campaign director for Color Of Change.

While Change the Terms is not explicitly calling for Trump’s account to be banned, Renderos emphasized the importance of platforms proactively enforcing their content policies across all accounts, even those belonging to prominent politicians.

White nationalists are taking advantage of online platforms like Twitter to harass marginalized communities, build power and organizational strength, and amplify violent ideologies, said Collins-Dexter, calling for Twitter’s leadership to “get over their fear of conservative backlash and fully stamp out discrimination on the platform.”

(Photo of Brandi Collins-Dexter of Color of Change by New America, used with permission.)

Section 230

Democrats Use Whistleblower Testimony to Launch New Effort at Changing Section 230

The Justice Against Malicious Algorithms Act seeks to target large online platforms that push harmful content.

Rep. Anna Eshoo, D-California

WASHINGTON, October 14, 2021 – House Democrats are preparing to introduce legislation Friday that would remove legal immunities from companies that knowingly allow content that is physically or emotionally damaging to their users, following testimony last week from a Facebook whistleblower who claimed the company is able to push harmful content because of such legal protections.

The Justice Against Malicious Algorithms Act would amend Section 230 of the Communications Decency Act – which provides legal liability protections to companies for the content their users post on their platforms – to remove that shield when a platform “knowingly or recklessly uses an algorithm or other technology to recommend content that materially contributes to physical or severe emotional injury,” according to a Thursday press release. The release noted that the legislation will not apply to small online platforms with fewer than five million unique monthly visitors or users.

The legislation is relatively narrow in its target: algorithms that rely on a user’s personal history to recommend content. It won’t apply to search features or to algorithms that do not rely on that personalization, and it won’t apply to web hosting or to data storage and transfer.

Reps. Anna Eshoo, D-California, Frank Pallone Jr., D-New Jersey, Mike Doyle, D-Pennsylvania, and Jan Schakowsky, D-Illinois, plan to introduce the legislation a little over a week after Facebook whistleblower Frances Haugen alleged that the company misrepresents how much offending content it removes.

Citing Haugen’s testimony before the Senate on October 5, Eshoo said in the release that “Facebook is knowingly amplifying harmful content and abusing the immunity of Section 230 well beyond congressional intent.

“The Justice Against Malicious Algorithms Act ensures courts can hold platforms accountable when they knowingly or recklessly recommend content that materially contributes to harm. This approach builds on my bill, the Protecting Americans from Dangerous Algorithms Act, and I’m proud to partner with my colleagues on this important legislation.”

The Protecting Americans from Dangerous Algorithms Act was introduced with Rep. Tom Malinowski, D-New Jersey, last October to hold companies responsible for “algorithmic amplification of harmful, radicalizing content that leads to offline violence.”

From Haugen testimony to legislation

Haugen claimed in her Senate testimony that according to internal research estimates, Facebook acts against just three to five percent of hate speech and 0.6 percent of violence incitement.

“The reality is that we’ve seen from repeated documents within my disclosures that Facebook’s AI systems only catch a very tiny minority of offending content,” Haugen testified, adding that in a best-case scenario, for something like hate speech, those systems would catch at most 10 to 20 percent of it.

Haugen was catapulted into the national spotlight after she revealed herself on the television program 60 Minutes to be the person who leaked documents to the Wall Street Journal and the Securities and Exchange Commission. The documents reportedly showed that Facebook knew about the harm its photo-sharing app Instagram does to teens’ mental health but allegedly ignored it because addressing the problem would have cut into profits.

Earlier this year, Facebook CEO Mark Zuckerberg said the company was developing a version of Instagram for kids under 13. But following the Journal story and calls by lawmakers to back down from pursuing the app, Facebook suspended its development and said it was making changes to its apps to “nudge” users away from content that may be harmful to them.

Haugen’s testimony versus Zuckerberg’s Section 230 vision

In his testimony before the House Energy and Commerce Committee in March, Zuckerberg claimed that the company’s hate speech removal policy “has long been the broadest and most aggressive in the industry.”

This claim has been the basis for the CEO’s suggestion that Section 230 be amended to punish companies that fail to build content-removal systems proportional in size and effectiveness to their platform’s scale. In other words, larger sites would face more regulation and smaller sites less.

Or in Zuckerberg’s words to Congress, “platforms’ intermediary liability protection for certain types of unlawful content [should be made] conditional on companies’ ability to meet best practices to combat the spread of harmful content.”

Facebook has previously pushed for FOSTA-SESTA, a controversial 2018 law that created an exception to Section 230 for advertisements related to prostitution. Lawmakers have proposed other modifications to the liability provision, including removing protections for content that the platform is paid to distribute and for allowing the spread of vaccine misinformation.

Zuckerberg said companies shouldn’t be held responsible for individual pieces of content that evade their safeguards, so long as the company has demonstrated that it maintains “adequate systems to address unlawful content.” That, he said, is predicated on transparency.

But according to Haugen, “Facebook’s closed design means it has no oversight — even from its own Oversight Board, which is as blind as the public. Only Facebook knows how it personalizes your feed for you. It hides behind walls that keep the eyes of researchers and regulators from understanding the true dynamics of the system.” She also alleges that Facebook’s leadership hides “vital information” from the public and global governments.

An Electronic Frontier Foundation study found that Facebook lags behind competitors on issues of transparency.

Where the parties agree

Zuckerberg and Haugen do agree that Section 230 should be amended. Haugen would amend Section 230 “to make Facebook responsible for the consequences of their intentional ranking decisions,” meaning that practices such as engagement-based ranking would be evaluated for the incendiary or violent content they promote above more mundane content. If Facebook is choosing to promote content which damages mental health or incites violence, Haugen’s vision of Section 230 would hold them accountable. This change would not hold Facebook responsible for user-generated content, only the promotion of harmful content.

Both have also called for a third-party body to be created by the legislature which provides oversight on platforms like Facebook.

Haugen asks that this body be able to conduct independent audits of Facebook’s data, algorithms, and research, and that the information be made available to the public, scholars, and researchers to interpret, with adequate privacy protections and anonymization in place. Zuckerberg, besides asking that the body take into account the size and scope of the platforms it regulates, asks that its practices be “fair and clear” and that unrelated issues “like encryption or privacy changes” be dealt with separately.

With reporting from Riley Steward

Social Media

Congress Must Force Facebook to Make Internal Research Public, Whistleblower Testifies

Frances Haugen testified before a Senate subcommittee studying online protections for kids after revealing herself as the Facebook whistleblower.

Facebook whistleblower Frances Haugen testifies in front of Senate committee on October 5.

WASHINGTON, October 5, 2021 – The former Facebook employee who outed herself as the whistleblower who leaked documents to the Wall Street Journal showing that Facebook knew its photo-sharing app Instagram harmed kids’ mental health told a Senate committee that the company’s alleged profit-driven motives mean its internal research cannot be kept behind closed doors.

Frances Haugen testified Tuesday before the Senate Subcommittee on Consumer Protection, Product Safety and Data Security, which is looking into protecting kids online. She identified herself Sunday on the television program 60 Minutes as the person who gave the Journal and the Securities and Exchange Commission documents showing the company pressing ahead with a kids’ version of Instagram despite knowing the mental health impact its apps have on that demographic. (Facebook has since halted development of the kids’ app, following the Journal story and lawmakers’ calls for it to be suspended.)

“We should not expect Facebook to change. We need action from Congress,” Haugen said Tuesday.

That action, she recommended, includes forcing Facebook to make all future internal research fully public because the company cannot be trusted to act on its own commissioned work.

Haugen said the company has not taken such action, which could include preemptively shutting down development of its Instagram for kids product, because it is allegedly driven by a profit-first model.

“Facebook repeatedly encountered conflicts between its own profits and our safety. Facebook consistently resolved those conflicts in favor of its own profits,” alleged Haugen, who now considers herself an advocate for public oversight of social media.

“The result has been a system that amplifies division, extremism, and polarization — and undermining societies around the world. In some cases, this dangerous online talk has led to actual violence that harms and even kills people. In other cases, their profit optimizing machine is generating self-harm and self-hate — especially for vulnerable groups, like teenage girls. These problems have been confirmed repeatedly by Facebook’s own internal research.”

Despite calls to modify Section 230 of the Communications Decency Act, which shields large tech platforms from legal liability for what their users post, Haugen said that such changes, along with tweaks to outdated privacy protections, won’t be enough.

Facebook has for months touted that it removes millions of groups and accounts that violate its community guidelines on hate speech and inciting violence. But Haugen alleges that despite claiming to actively make its platforms safer, the company takes down only three to five percent of those threats.

Asked by Senator Ben Ray Lujan, D-New Mexico, whether Facebook had “ever found a feature on its platform harmed its users, but the feature moved forward because it would also grow users or increase revenue,” Haugen said yes, alleging the company prioritized ease of resharing over that feature’s susceptibility to spreading “hate speech, misinformation or violence incitement,” even though fixing it would only “decrease growth a tiny, little amount.”

She also alleged that those directions came from the head of the company himself, Mark Zuckerberg, who allegedly chose arbitrary or vague “metrics defined by Facebook, like meaningful social interactions over changes that would have significantly decreased misinformation, hate speech and other inciting content.”

Facebook’s troubles, up to this point

Facebook has been the target of Washington’s ire for months. It has been cited as an alleged enabler of the January 6 Capitol Hill riot that sought to stop the transition to a Joe Biden presidency, despite the platform’s ban of former president Donald Trump. The platform has also been blamed for allowing the spread of information that has led to violence in parts of the world, including genocide in Myanmar.

A number of public interest groups have also accused the platform of suppressing stories from progressive news outlets, of censoring information that conflicts with its own interests, and of using algorithms that deliver the same kinds of information to people so they are not exposed to different viewpoints.

In 2018, Facebook made worldwide news after reports in the Guardian and the New York Times revealed that nearly 100 million Facebook profiles had been harvested by a company called Cambridge Analytica, which used the data to build profiles of people and target them with material designed to sway them in a political direction.

Federal regulators have already been moving against Facebook and other Big Tech companies, a clear agenda item of the Biden administration. The White House has installed Amazon critic Lina Khan as head of the Federal Trade Commission, which recently filed a monopoly complaint against Facebook in court, and has appointed other critics, including Google critic Jonathan Kanter, to the Department of Justice’s antitrust division.

Facebook’s week has gone from bad to worse. Haugen, a former Facebook product manager and Harvard MBA graduate, testified Tuesday in a hearing titled “Protecting Kids Online” before the Subcommittee on Consumer Protection, Product Safety, and Data Security. Previous opposition to Facebook’s plans to expand its products to minors had come from external parties such as public interest groups and members of Congress.

Section 230

Repealing Section 230 Would be Harmful to the Internet As We Know It, Experts Agree

While some advocate for a tightening of language, other experts believe Section 230 should not be touched.

Rep. Ken Buck, R-Colo., speaking on the floor of the House

WASHINGTON, September 17, 2021—Rep. Ken Buck, R-Colorado, advocated for legislators to “tighten up” the language of Section 230 while preserving the “spirit of the internet” and enhancing competition.

There is common ground in supporting efforts to minimize speech advocating imminent harm, Buck said, even though he noted that Republican and Democratic critics tend to approach the issue of changing Section 230 from vastly different directions.

“Nobody wants a terrorist organization recruiting on the internet or an organization that is calling for violent actions to have access to Facebook,” Buck said. He followed up that statement, however, by stating that the most effective way to combat “bad speech is with good speech” and not by censoring “what one person considers bad speech.”

Antitrust not necessarily the best means to improve competition policy

For companies that are not technically in violation of antitrust policies, improving competition through other means would have to be the answer, said Buck. He pointed to Parler as a social media platform that serves as an appropriate alternative to Twitter.

Though some Twitter users did flock to Parler, particularly during and around the 2020 election, the newer social media company has a reputation for allowing objectionable content that would otherwise be unable to thrive on social media.

Buck also set himself apart from some of his fellow Republicans—including Donald Trump—by clarifying that he does not want to repeal Section 230.

“I think that repealing Section 230 is a mistake,” he said. “If you repeal Section 230, there will be a slew of lawsuits.” Buck explained that without the protections afforded by Section 230, big companies would likely find a way to weather those lawsuits, and the only entities harmed would be the alternative platforms that were meant to serve as competition.

More content moderation needed

Daphne Keller of the Stanford Cyber Policy Center argued that it is in the best interest of social media platforms to enact various forms of content moderation, and address speech that may be legal but objectionable.

“If platforms just hosted everything that users wanted to say online, or even everything that’s legal to say—everything that the First Amendment permits—you would get this sort of cesspool or mosh pit of online speech that most people don’t actually want to see,” she said. “Users would run away and advertisers would run away and we wouldn’t have functioning platforms for civic discourse.”

Even companies like Parler and Gab—which pride themselves on being unyielding bastions of free speech—have begun to engage in content moderation.

“There’s not really a left-right divide on whether that’s a good idea, because nobody actually wants nothing but porn and bullying and pro-anorexia content and other dangerous or garbage content all the time on the internet,” Keller said.

She explained that this is a double-edged sword, because while consumers seem to value some level of moderation, companies moderating their platforms have a huge amount of influence over what their consumers see and say.

What problems do critics of Section 230 want addressed?

Internet Association President and CEO Dane Snowden stated that most of the problems surrounding the Section 230 discussion boil down to a fundamental disagreement over the problems that legislators are trying to solve.

Changing the language of Section 230 would impact not just the tech industry: “[Section 230] impacts ISPs, libraries, and universities,” he said. “Things like self-publishing, crowdsourcing, Wikipedia, how-to videos—all those things are impacted by any kind of significant neutering of Section 230.”

Section 230 was created to give users the ability and security to create content online without fear of legal reprisals, he said.

Another significant supporter of the status quo was Chamber of Progress CEO Adam Kovacevich.

“I don’t think Section 230 needs to be fixed. I think it needs [a better] publicist,” Kovacevich said, arguing that policymakers need to gain a better appreciation for the law. “If you took away 230, you’d give companies two bad options: either turn into Disneyland or turn into a wasteland.”

“Either turn into a very highly curated experience where only certain people have the ability to post content, or turn into a wasteland where essentially anything goes because a company fears legal liability,” Kovacevich said.
