Social Media

Seeking to Quell ‘Evil Contagion’ of ‘White Supremacy,’ President Trump May Ignite New Battle Over Online Hate Speech

WASHINGTON, August 5, 2019 — President Donald Trump on Monday morning attempted to strike a tone of unity by denouncing the white, anti-Hispanic man who “shot and murdered 20 people, and injured 26 others, including precious little children.”

In speaking about the two significant mass shootings over the weekend in Texas and Ohio, Trump delivered prepared remarks in which he specifically denounced “racism, bigotry, and white supremacy,” and linked them to the “warp[ed] mind” of the racially-motivated El Paso killer.

That shooter – now in custody – posted a manifesto online before the shooting in which he said he was responding to the “Hispanic invasion of Texas.” The shooter cited the March 15 massacre at two mosques in Christchurch, New Zealand, as an inspiration for his action.

In White House remarks with Vice President Mike Pence standing at his side, Trump proposed solutions to “stop this evil contagion.” Trump denounced “hate” or “racist hate” four times.

Trump’s first proposed solution: “I am directing the Department of Justice to work in partnership with local, state, and federal agencies, as well as social media companies, to develop tools that can detect mass shooters before they strike.”

That proposal appeared to be an initiative that was either targeted at – or potentially an opportunity for collaboration with – social media giants like Twitter, Facebook and Google.

Indeed, Trump and others on the political right have repeatedly criticized these social media giants for bias against Trump and Republicans.

Sometimes, this right-wing criticism of Twitter emerges after a user is banned for violating the social media company’s terms of service against “hate speech.”

In Trump’s remarks, he also warned that “we must shine light on the dark recesses of the internet.” Indeed, Trump said that “the perils of the internet and social media cannot be ignored, and they will not be ignored.”

But it must be equally clear to the White House that the El Paso killer – in his online manifesto – used anti-Hispanic and anti-immigrant rhetoric very similar to Trump’s own repeated words about an “invasion” of Mexican and other Latin Americans at the United States border.

Hence this mass murder contains elements of political peril for both Donald Trump and for his frequent rivals at social media companies like Twitter, Facebook and Google.

8chan gets taken down by its network provider

Minutes before the El Paso attack at a Wal-Mart, a manifesto titled “The Inconvenient Truth” was posted to the online platform 8chan, claiming that the shooting was in response to the “Hispanic invasion.” The killer specifically cited the Christchurch shooter’s white supremacist manifesto as an inspiration.

Social media platforms, previously utilized by Islamic terrorists, are increasingly being utilized by white supremacist terrorists. In addition to posting his manifesto online, the Christchurch shooter livestreamed his attack on Facebook.

In April, a man posted an anti-Semitic and white nationalist letter to the same online forum, 8chan, before opening fire at a synagogue near San Diego, California.

And on July 28, the gunman who killed three people at a garlic festival in Gilroy, California, allegedly promoted a misogynist white supremacist book on Instagram just prior to his attack.

But Saturday’s El Paso shooting motivated some companies to act. Cloudflare, 8chan’s network provider, pulled its support for 8chan early on Monday morning, calling the platform a “cesspool of hate.”

“While removing 8chan from our network takes heat off of us, it does nothing to address why hateful sites fester online,” wrote Cloudflare CEO Matthew Prince.

“It does nothing to address why mass shootings occur,” said Prince. “It does nothing to address why portions of the population feel so disenchanted they turn to hate. In taking this action we’ve solved our own problem, but we haven’t solved the internet’s.”

Prince continued to voice his discomfort about the company taking the role of content arbitrator, and pointed to Europe’s attempts to have more government involvement.

The Christchurch massacre opened a dialogue between big tech and European critics of ‘hate speech’

Following the Christchurch attack, 18 governments in May signed the Christchurch Call pledge (PDF) seeking to stop the internet from being used as a tool by violent extremists. The U.S. did not sign on, and the White House voiced concerns that the document would violate the First Amendment.

Dubbed “The Christchurch Call to Action to Eliminate Terrorist and Violent Extremist Content Online,” the May document included commitments by both online service providers, and by governments.

Among other measures, the online providers were to “[t]ake transparent, specific measures seeking to prevent the upload of terrorist and violent extremist content and to prevent its dissemination on social media.”

Governments were to “[e]nsure effective enforcement of applicable laws that prohibit the production or dissemination of terrorist and violent extremist content.”

Although Silicon Valley has had a reputation for supporting a libertarian view of free speech, the increasingly unruly world of social media over the past decade has put that First Amendment absolutism to the test.

Indeed, five big tech giants – Google, Amazon, Facebook, Twitter and Microsoft – voiced their support for the Christchurch Call on the day of its release.

In particular, they accepted the restrictions on freedom of speech that the Christchurch Call would impose, saying that the massacre was “a horrifying tragedy” that made it “right that we come together, resolute in our commitment to ensure we are doing all we can to fight the hatred and extremism that lead to terrorist violence.”

In particular, they noted that the Christchurch Call expands on the Global Internet Forum to Counter Terrorism set up by Facebook, Google’s YouTube, Microsoft and Twitter in the summer of 2017.

The objective of this organization is focused on disrupting terrorists’ ability to promote terrorism, disseminate violent propaganda, and exploit or glorify real-world acts of violence.

The tech giants said (PDF) that they were sharing more information about how they could “detect and remove this content from our services, updates to our individual terms of use, and more transparency for content policies and removals.”

Will Trump politicize the concept of ‘hate speech’ that tech companies are uniting with Europe to take down?

In his Monday statement commenting on an ostensible partnership between the Justice Department and the social media companies, Trump referred to the need to “detect mass shooters before they strike.”

And he had this specific example: “As an example, the monster in the Parkland high school in Florida had many red flags against him, and yet nobody took decisive action. Nobody did anything. Why not?”

Part of the challenge now faced by social media companies is frankly political. Although Twitter has taken aggressive steps to eradicate ISIS content from its platform, it has not applied the same tools and algorithms to take down white supremacist content.

Society accepts the risk of inconveniencing potentially related accounts, such as those of Arabic-language broadcasters, for the benefit of banning ISIS content, Motherboard summarized earlier this year based on its interviews with Twitter employees.

But if these same aggressive tactics were deployed against white nationalist terrorism, the algorithms would likely flag content from prominent Republican politicians, far-right commentators – and Donald Trump himself, these employees said.

Indeed, right after declining to sign the Christchurch Call, the White House escalated its war against American social media by announcing a campaign asking internet users to share stories of when they felt censored by Facebook, Twitter and Google’s YouTube.

And in June, Twitter made it clear that it was speaking directly about Tweets by prominent public officials, including the president, that violated its terms of service.

“In the past, we’ve allowed certain Tweets that violated our rules to remain on Twitter because they were in the public’s interest, but it wasn’t clear when and how we made those determinations,” a Twitter official said. “To fix that, we’re introducing a new notice that will provide additional clarity in these situations, and sharing more on when and why we’ll use it.”

White House officials did not immediately respond to questions about whether the Trump administration was reconsidering its opposition to the Christchurch Call.

Will Trump’s speech put others in the spotlight, or keep it on him and his rhetoric?

In addition to highlighting the anticipated effort with social media, Trump had four additional suggested “bipartisan solutions” to the “evil contagion” caused by the Texas and Ohio mass shootings.

They included “stopp[ing] the glorification of violence in our society” in video games, addressing mental health laws “to better identify mentally disturbed individuals,” keeping firearms from those “judged to pose a grave risk to public safety,” and seeking the death penalty against those who commit hate crimes and mass murders.

Trump’s advisers said that they hoped the speech would stem the tide of media attention being given to the links between his frequent use of dehumanizing language to describe Latin American immigrants and the El Paso shooter’s anti-Hispanic rhetoric.

As he delivered his prepared remarks from a TelePrompTer in a halting cadence, Trump appeared to be reading the speech for the first time. This led to an awkward moment when he suggested that the second shooting of the weekend – which had taken place outside a Dayton, Ohio bar – had been in Toledo, Ohio.

But despite the visible discomfiture that is evident when he reads prepared remarks to the White House press pool cameras, Trump attempted to silence critics like former El Paso Congressman Beto O’Rourke – who just hours before had explicitly called the President a white nationalist – by calling for the defeat of “sinister ideologies” of hate.

“In one voice, our nation must condemn racism, bigotry, and white supremacy,” Trump said. “Hate has no place in America. Hatred warps the mind, ravages the heart, and devours the soul.”

Trump did not elaborate on the hate-based motivations of the El Paso shooter. Rather than reflect on where the El Paso shooter may have gotten the idea that Hispanics were “invading” the United States, Trump cast blame on one of the targets often invoked by conservatives after such mass shootings: video games.

Although Trump has previously delivered remarks in the aftermath of violent acts committed by white supremacists and white nationalists during his presidency, Monday’s speech marked the first time that the President had chosen to specifically condemn “white supremacy,” rather than deliver a more general condemnation of “hate.”

In his rhetoric, both on his Twitter account and on the campaign trail, Trump uses non-whites as a foil, beginning with his 2015 campaign announcement speech, in which he described Mexican immigrants as “rapists” who bring crime and drugs to America.

That rhetoric reappeared in the 2018 Congressional elections as Trump spoke about an “invasion” from South and Central America taking up a significant portion of his rally stump speech.

As the 2020 election draws nearer, Trump’s campaign strategy similarly appears to rely on demonizing racial minorities and prominent Democrats of color, most recently Rep. Elijah Cummings, D-Md., the chairman of the House Oversight Committee.

Trump critics not appeased by his Monday speech

Commentators said Monday’s condemnation of white supremacy marked a 180-degree turn for the President. But his performance did not leave many observers convinced of his sincerity.

House Homeland Security Committee Chairman Bennie Thompson, D-Miss., called the President’s speech “meaningless.”

“We know tragedy after tragedy his words have not led to solid action or any change in rhetoric. We know his vile and racist words have incited violence and attacks on Americans,” he said in a statement. “Now dozens are dead and white supremacist terrorism is on the rise and is now our top domestic terrorism threat.”

Sen. Ron Wyden, D-Ore., tweeted that Trump had “addressed the blaze today with the equivalent of a water balloon” after “fanning the flames of white supremacy for two-and-a-half years in the White House.”

Ohio Democratic Party Chairman David Pepper said Trump’s condemnation of white supremacy in Monday’s remarks could not make up for his years of racist campaign rhetoric.

“Through years of campaigning and hate rallies, to now say ‘I’m against hateful people and racism,’ is just hard to listen to,” Pepper said during a phone interview.

“Unless he’s willing to say ‘I know I’ve been a part of it’ with a full apology and some self recognition, it felt like he was just checking the boxes.”

Pepper suggested that Trump “was saying what someone told him to say,” and predicted that Trump would soon walk back his remarks, much as he did after the 2017 “Unite the Right” white supremacist rally in Virginia.

Charlie Sykes, a former conservative talk radio host and editor of “The Bulwark,” echoed Pepper’s sentiments in a separate phone interview, but also called out Trump for failing to speak of the El Paso shooter’s motivations.

“It was so perfunctory and inadequate because he condemned the words ‘bigotry and racism,’ but he didn’t describe what he was talking about,” Sykes said.

Sykes criticized Trump for failing to take responsibility for his routine use of racist rhetoric, including descriptions of immigrants as “invaders” who “infest” the United States.

“Unless you’re willing to discuss the dehumanization behind the crimes, the invocation of certain words doesn’t change anything.”

Another longtime GOP figure whom Trump failed to impress was veteran strategist Rick Wilson, who called the speech the latest example of “the delta between Trump on the TelePrompTer and Trump at a rally,” a difference he described as “enormous.”

“Nothing about that speech had a ring of authenticity to it,” said Wilson, a legendary GOP ad maker and the author of “Everything Trump Touches Dies.”

“The contrast between the speechwriter’s handiwork and the real Donald Trump…is rather marked,” he said.

Where does online free speech – and allegations of ‘hate crimes’ – go from here?

Although the social media companies are making more efforts to harness and expunge online hate, they are unlikely to be able to get very far without someone – perhaps even President Trump – crying foul.

Putting the politics of online hate speech aside, the U.S. does take a fundamentally different approach to freedom of expression than does Europe.

According to Human Rights Watch, hundreds of French citizens are convicted of “apologies for terrorism” each year, an offense that includes any positive comment about a terrorist or terrorist organization. Online offenses are treated especially harshly.

By contrast, the U.S. has a fundamental commitment to the freedom of speech—including speech that is indecent, offensive, and hateful.

The Supreme Court has ruled that speech is unprotected when it is “directed to inciting or producing imminent lawless action” and is “likely to incite or produce such action.”

But this exception is extremely narrow: in Brandenburg v. Ohio, the Court reversed the conviction of a Ku Klux Klan leader who advocated violence as a means of political reform, holding that his statements did not express an immediate intent to do violence.

The limitations on government leave the responsibility of combating online extremism to the digital platforms themselves, said Open Technology Institute Director Sarah Morris at a panel last month.

“In general, private companies have a lot more flexibility in how they respond to terrorist propaganda than Congress does,” said Emma Llansó, Director of the Free Expression Project at the Center for Democracy & Technology. “They need to be clear about what their policies are and enforce them transparently.”

But companies also need to carefully consider how they will respond to pressure from governments and individuals around the world, said Llansó, adding that “no content policy or community guideline is ever applied just in the circumstances it was designed for.”

“As the experience of social media companies has shown us, content moderation is extremely difficult to do well,” Llansó concluded. “It requires an understanding of the context that the speaker and the audience are operating in, which a technical infrastructure provider is not likely to have.”

(Managing Editor Andrew Feinberg and Reporter Emily McPhie contributed reporting to this article. Photo of Vice President Pence beside Trump speaking on August 5, 2019, from the White House.)

Drew Clark is the Editor and Publisher of BroadbandBreakfast.com and a nationally-respected telecommunications attorney at The CommLaw Group. He has closely tracked the trends in and mechanics of digital infrastructure for 20 years, and has helped fiber-based and fixed wireless providers navigate coverage, identify markets, broker infrastructure, and operate in the public right of way. The articles and posts on Broadband Breakfast and affiliated social media, including the BroadbandCensus Twitter feed, are not legal advice or legal services, do not constitute the creation of an attorney-client privilege, and represent the views of their respective authors.

Section 230

Repealing Section 230 Would be Harmful to the Internet As We Know It, Experts Agree

While some advocate for a tightening of language, other experts believe Section 230 should not be touched.

Rep. Ken Buck, R-Colo., speaking on the floor of the House

WASHINGTON, September 17, 2021—Rep. Ken Buck, R-Colo., advocated for legislators to “tighten up” the language of Section 230 while preserving the “spirit of the internet” and enhancing competition.

There is common ground in supporting efforts to minimize speech advocating imminent harm, said Buck, even though he noted that Republican and Democratic critics tend to approach the issue of changing Section 230 from vastly different directions.

“Nobody wants a terrorist organization recruiting on the internet or an organization that is calling for violent actions to have access to Facebook,” Buck said. He followed up that statement, however, by stating that the most effective way to combat “bad speech is with good speech” and not by censoring “what one person considers bad speech.”

Antitrust not necessarily the best means to improve competition policy

For companies that are not technically in violation of antitrust policies, improving competition through other means would have to be the answer, said Buck. He pointed to Parler as a social media platform that is an appropriate alternative to Twitter.

Though some Twitter users did flock to Parler, particularly during and around the 2020 election, the newer social media company has a reputation for allowing objectionable content that would be unable to thrive on other social media platforms.

Buck also set himself apart from some of his fellow Republicans—including Donald Trump—by clarifying that he does not want to repeal Section 230.

“I think that repealing Section 230 is a mistake,” he said. “If you repeal Section 230 there will be a slew of lawsuits.” Buck explained that without the protections afforded by Section 230, big companies will likely find a way to sufficiently address these lawsuits, and the only entities that will be harmed will be the alternative platforms that were meant to serve as competition.

More content moderation needed

Daphne Keller of the Stanford Cyber Policy Center argued that it is in the best interest of social media platforms to enact various forms of content moderation, and address speech that may be legal but objectionable.

“If platforms just hosted everything that users wanted to say online, or even everything that’s legal to say—everything that the First Amendment permits—you would get this sort of cesspool or mosh pit of online speech that most people don’t actually want to see,” she said. “Users would run away and advertisers would run away and we wouldn’t have functioning platforms for civic discourse.”

Even companies like Parler and Gab—which pride themselves on being unyielding bastions of free speech—have begun to engage in content moderation.

“There’s not really a left right divide on whether that’s a good idea, because nobody actually wants nothing but porn and bullying and pro-anorexia content and other dangerous or garbage content all the time on the internet.”

She explained that this is a double-edged sword, because while consumers seem to value some level of moderation, companies moderating their platforms have a huge amount of influence over what their consumers see and say.

What problems do critics of Section 230 want addressed?

Internet Association President and CEO Dane Snowden stated that most of the problems surrounding the Section 230 discussion boil down to a fundamental disagreement over the problems that legislators are trying to solve.

Changing the language of Section 230 would impact not just the tech industry: “[Section 230] impacts ISPs, libraries, and universities,” he said. “Things like self-publishing, crowdsourcing, Wikipedia, how-to videos—all those things are impacted by any kind of significant neutering of Section 230.”

Section 230 was created to give users the ability and security to create content online without fear of legal reprisals, he said.

Another significant supporter of the status quo was Chamber of Progress CEO Adam Kovacevich.

“I don’t think Section 230 needs to be fixed. I think it needs [a better] publicist,” Kovacevich said, stating that policymakers need to gain a better appreciation for Section 230. “If you took away 230, you would give companies two bad options: either turn into Disneyland or turn into a wasteland.”

“Either turn into a very highly curated experience where only certain people have the ability to post content, or turn into a wasteland where essentially anything goes because a company fears legal liability,” Kovacevich said.

Social Media

Members of Congress Request Facebook Halt ‘Instagram For Kids’ Plan Following Mental Health Research Report

Letter follows Wall Street Journal story that reports Facebook knew about mental health damage Instagram has on teens.

WASHINGTON, September 15, 2021 – Members of Congress sent a letter Wednesday to Facebook CEO Mark Zuckerberg urging the company to stop its plan to launch a new platform for kids, following a report by the Wall Street Journal citing company documents that reportedly show the company knows its platforms harm the mental health of teens.

The letter, signed by Sen. Edward Markey, D-Massachusetts, and Reps. Kathy Castor, D-Florida, and Lori Trahan, D-Massachusetts, also asks Facebook to provide answers by October 6 to questions including who at the company reviewed the mental health research cited in the Journal report; whether the company will agree to abandon plans to launch a new platform for children or teens; and when the company will begin studying its platforms’ impact on kids’ mental health.

The letter also demands an update on the company’s plans for new products targeting children or teens, asks for copies of internal research regarding the mental health of this demographic, and copies of any external research the company has commissioned or accessed related to this matter.

The letter cites the Journal’s September 14 story, which reports that the company has spent the past three years conducting studies into how photo-sharing app Instagram, which Facebook owns, affects millions of young users, and found that the app is “harmful for a sizable percentage of them, most notably teenage girls.” The story recounts the experience of a teen who had to see a therapist for an eating disorder that developed after exposure to images of other users’ bodies.

The story also cites a presentation that said teens were blaming Instagram for anxiety, depression, and the desire to kill themselves.

The head of Instagram, Adam Mosseri, told the Journal that research on mental health was valuable and that Facebook was late to realizing the drawbacks of connecting large swaths of people, according to the story. But he added that there’s “a lot of good that comes with what we do.”

Facebook told Congress it was planning ‘Instagram for kids’

Back in March, during a congressional hearing about Big Tech’s influence, Zuckerberg said Instagram was in the planning stages of building an “Instagram for kids.” Instagram itself does not allow kids under 13 to use the app.

On April 5, Markey, Castor and Trahan penned their names on another letter to Zuckerberg, which expressed concerns about the plan. “Children are a uniquely vulnerable population online, and images of kids are highly sensitive data,” the April letter said. “Facebook has an obligation to ensure that any new platforms or projects targeting children put those users’ welfare first, and we are skeptical that Facebook is prepared to fulfil this obligation.”

The plan was also met with opposition from the Campaign for a Commercial-Free Childhood, the Center for Humane Technology, Common Sense Media, and the Center for Digital Democracy, who said the app “preys on their fear of missing out as their ravenous desire for approval by peers exploits their developmental growth.

“The platform’s relentless focus on appearance, self-presentation, and branding presents challenges to adolescents’ privacy and well-being,” the opponents said. “Younger children are even less developmentally equipped to deal with these challenges, as they are learning to navigate social interactions, friendships, and their inner sense of strengths during this crucial window of development.”

At the March hearing, however, Zuckerberg claimed that social apps that connect people can have positive mental health benefits.

And then in August, Sens. Richard Blumenthal, D-Connecticut, and Marsha Blackburn, R-Tennessee, sent a letter to Zuckerberg asking for the company’s research on mental health. Facebook responded without providing the research, but said there are challenges with doing such research, the Journal said. “We are not aware of a consensus among studies or experts about how much screen time is ‘too much,’” according to the Journal, citing the response letter to the senators.

China

Experts Raise Alarm About China’s App Data Aggregation Potential

The Communist government has access to a vast trove from Chinese-made apps.

Former Commerce aide and professor at Texas A&M University, Margaret Peterlin

WASHINGTON, September 2, 2021 – Social media app TikTok’s rise as one of the world’s most downloaded pieces of software is concerning experts, who say the aggregate data collected across a number of Chinese-made apps will allow the Communist government to get ahead of any federal action to stem the data flow.

In June, President Joe Biden signed an executive order that revoked a Trump directive that sought to ban TikTok and replaced it with criteria for the Commerce Department to evaluate the risks of apps connected to foreign adversaries. The Trump administration had even pressured TikTok to sell its U.S. business, but that sale never materialized.

On a webinar hosted by the Federalist Society on Thursday, panelists said the U.S. government may already be behind in the race to contain data collection and prevent it from getting into the hands of the Chinese government, which is using the data to create advanced artificial intelligence.

Margaret Peterlin, a lawyer, former Commerce Department aide and professor at the school of public service at Texas A&M University, said her concern with Biden’s executive order is whether it’s “strategically responsive” to what the Chinese government intends to do with all these sources of data – WeChat, TikTok, AliExpress, and its massive influence in telecommunications with Huawei and ZTE.

She noted that the Communist government has been very clear about its direction: it wants to dominate data aggregation and use that prowess to develop advanced artificial intelligence technologies. She illustrated this with the example of how the government uses advanced identification and surveillance technologies to monitor the Uyghur minority.

Peterlin also raised the issue of Chinese telecommunications companies like Huawei and ZTE, which have been the subject of restrictions from the Biden administration and the Federal Communications Commission in recent months. But she noted that Huawei is still involved with regional carriers in the United States.

The FCC has addressed this concern by offering to compensate carriers to “rip and replace” that risky equipment. (Part of Huawei’s attraction is its relatively low cost compared to its European rivals, for example.)

She noted that 5G “isn’t just another G” because there are many more connection and data points. Due to the promised lower latency, critical infrastructure like power grids and dams, and even medical devices, can be controlled over next-generation networks. Peterlin said these points of connection cannot be something the Chinese government can access.

For Jamil Jaffer, founder and executive director of the National Security Institute, his concern is the pace at which the Chinese government is moving. “I worry that it’s very late in the process to be getting to this point, and they’re well ahead of us,” he said, speaking on China’s growing influence in the technology space.

Jennifer Hay, senior director for national security programs at DataRobot, a company that develops AI software, said Biden’s executive order should be able to expand to other platforms and empower the Commerce Department to look into what is going on behind the scenes on these applications.

She said the government needs to be able to make educated decisions about who’s using Americans’ data and what that data is being used for.

Hay even suggested that Congress step in and draft legislation on this kind of data collection, but Jaffer disagreed on the grounds that not only would a slow-moving government not be able to keep up with the rapid movement of technology, but legislation may impede business. He said this type of work is best left to the private sector to figure out.
