WASHINGTON, August 5, 2019 — President Donald Trump on Monday morning attempted to strike a tone of unity by denouncing the white, anti-Hispanic man who “shot and murdered 20 people, and injured 26 others, including precious little children.”
In speaking about the two significant mass shootings over the weekend in Texas and Ohio, Trump delivered prepared remarks in which he specifically denounced “racism, bigotry, and white supremacy,” and linked them to the “warp[ed] mind” of the racially motivated El Paso killer.
That shooter – now in custody – posted a manifesto online before the shooting in which he said he was responding to the “Hispanic invasion of Texas.” The shooter cited the March 15 massacre at two mosques in Christchurch, New Zealand, as an inspiration for his action.
In White House remarks with Vice President Mike Pence standing at his side, Trump proposed solutions to “stop this evil contagion.” Trump denounced “hate” or “racist hate” four times.
Trump’s first proposed solution: “I am directing the Department of Justice to work in partnership with local, state, and federal agencies, as well as social media companies, to develop tools that can detect mass shooters before they strike.”
That proposal appeared to be an initiative either targeted at – or potentially an opportunity for collaboration with – social media giants like Twitter, Facebook and Google.
Indeed, Trump and others on the political right have repeatedly criticized these social media giants for bias against Trump and Republicans.
Sometimes, this right-wing criticism of Twitter emerges after a user is banned for violating the social media company’s terms of service against “hate speech.”
In Trump’s remarks, he also warned that “we must shine light on the dark recesses of the internet.” Indeed, Trump said that “the perils of the internet and social media cannot be ignored, and they will not be ignored.”
But it must be equally clear to the White House that the El Paso killer – in his online manifesto – used anti-Hispanic and anti-immigrant rhetoric very similar to Trump’s own repeated words about an “invasion” of Mexican and other Latin Americans at the United States border.
Hence this mass murder contains elements of political peril for both Donald Trump and for his frequent rivals at social media companies like Twitter, Facebook and Google.
8chan gets taken down by its network provider
Minutes before the El Paso attack at a Wal-Mart, a manifesto titled “The Inconvenient Truth” was posted to the online platform 8chan, claiming that the shooting was in response to the “Hispanic invasion.” The killer specifically cited the Christchurch shooter’s white supremacist manifesto as an inspiration.
Social media platforms, previously exploited by Islamic terrorists, are increasingly being used by white supremacist terrorists. In addition to posting his manifesto online, the Christchurch shooter livestreamed his attack on Facebook.
In April, a man posted an anti-Semitic and white nationalist letter to the same online forum, 8chan, before opening fire at a synagogue near San Diego, California.
And on July 28, the gunman who killed three people at a garlic festival in Gilroy, California, allegedly promoted a misogynist white supremacist book on Instagram just prior to his attack.
But Saturday’s El Paso shooting motivated some companies to act. Cloudflare, 8chan’s network provider, pulled its support for 8chan early Monday morning, calling the platform a “cesspool of hate.”
“While removing 8chan from our network takes heat off of us, it does nothing to address why hateful sites fester online,” wrote Cloudflare CEO Matthew Prince.
“It does nothing to address why mass shootings occur,” said Prince. “It does nothing to address why portions of the population feel so disenchanted they turn to hate. In taking this action we’ve solved our own problem, but we haven’t solved the internet’s.”
Prince continued to voice his discomfort about the company taking the role of content arbitrator, and pointed to Europe’s attempts to have more government involvement.
The Christchurch massacre opened a dialogue between big tech and European critics of ‘hate speech’
Following the Christchurch attack, 18 governments in May signed the Christchurch Call pledge (PDF) seeking to stop the internet from being used as a tool by violent extremists. The U.S. did not sign on, and the White House voiced concerns that the document would violate the First Amendment.
Dubbed “The Christchurch Call to Action to Eliminate Terrorist and Violent Extremist Content Online,” the May document included commitments by both online service providers, and by governments.
Among other measures, the online providers were to “[t]ake transparent, specific measures seeking to prevent the upload of terrorist and violent extremist content and to prevent its dissemination on social media.”
Governments were to “[e]nsure effective enforcement of applicable laws that prohibit the production or dissemination of terrorist and violent extremist content.”
Although Silicon Valley has had a reputation for supporting a libertarian view of free speech, the increasingly unruly world of social media over the past decade has put that First Amendment absolutism to the test.
Indeed, five big tech giants – Google, Amazon, Facebook, Twitter and Microsoft – voiced their support for the Christchurch Call on the day of its release.
In particular, they accepted the apparent restrictions on freedom of speech that the Christchurch Call would impose, saying that the massacre was “a horrifying tragedy” that made it “right that we come together, resolute in our commitment to ensure we are doing all we can to fight the hatred and extremism that lead to terrorist violence.”
They also noted that the Christchurch Call expands on the Global Internet Forum to Counter Terrorism, set up by Facebook, Google’s YouTube, Microsoft and Twitter in the summer of 2017.
The objective of this organization is focused on disrupting terrorists’ ability to promote terrorism, disseminate violent propaganda, and exploit or glorify real-world acts of violence.
Will Trump politicize the concept of ‘hate speech’ that tech companies are uniting with Europe to take down?
In his Monday statement commenting on an ostensible partnership between the Justice Department and the social media companies, Trump referred to the need to “detect mass shooters before they strike.”
And he had this specific example: “As an example, the monster in the Parkland high school in Florida had many red flags against him, and yet nobody took decisive action. Nobody did anything. Why not?”
Part of the challenge now faced by social media companies is frankly political. Although Twitter has taken aggressive steps to eradicate ISIS content from its platform, it has not applied the same tools and algorithms to take down white supremacist content.
Society accepts the risk of inconveniencing potentially related accounts, such as those of Arabic-language broadcasters, for the benefit of banning ISIS content, Motherboard summarized earlier this year, based on its interviews with Twitter employees.
But if these same aggressive tactics were deployed against white nationalist terrorism, the algorithms would likely flag content from prominent Republican politicians, far-right commentators – and Donald Trump himself, these employees said.
Indeed, right after declining to sign the Christchurch Call, the White House escalated its war against American social media by announcing a campaign asking internet users to share stories of when they felt censored by Facebook, Twitter and Google’s YouTube.
And in June, Twitter made clear that it was speaking directly about Tweets by prominent public officials, including the president, that violated its terms of service.
“In the past, we’ve allowed certain Tweets that violated our rules to remain on Twitter because they were in the public’s interest, but it wasn’t clear when and how we made those determinations,” a Twitter official said. “To fix that, we’re introducing a new notice that will provide additional clarity in these situations, and sharing more on when and why we’ll use it.”
White House officials did not immediately respond to whether the Trump administration was reconsidering its opposition to the Christchurch Call.
Will Trump’s speech put others in the spotlight, or keep it on him and his rhetoric?
In addition to highlighting the anticipated effort with social media, Trump offered four additional suggested “bipartisan solutions” to the “evil contagion” caused by the Texas and Ohio mass shootings.
They included “stopp[ing] the glorification of violence in our society” in video games, addressing mental health laws “to better identify mentally disturbed individuals,” keeping firearms from those “judged to pose a grave risk to public safety,” and seeking the death penalty against those who commit hate crimes and mass murders.
Trump’s advisers said that they hoped the speech would stem the tide of media attention being given to the links between his frequent use of dehumanizing language to describe Latin American immigrants and the rhetoric of the El Paso shooter.
As he delivered his prepared remarks from a TelePrompTer in a halting cadence, Trump appeared to be reading the speech for the first time. This led to an awkward moment when he suggested that the second shooting of the weekend – which had taken place outside a Dayton, Ohio bar – had been in Toledo, Ohio.
But despite displaying the visible discomfiture that is evident when he reads prepared remarks to the White House press pool cameras, Trump made an attempt to silence critics like former El Paso Congressman Beto O’Rourke – who just hours before had explicitly called the President a white nationalist – by calling for defeat of “sinister ideologies” of hate.
“In one voice, our nation must condemn racism, bigotry, and white supremacy,” Trump said. “Hate has no place in America. Hatred warps the mind, ravages the heart, and devours the soul.”
Trump did not elaborate on the hate-based motivations of the El Paso shooter. Rather than reflect on where the El Paso shooter may have gotten the idea that Hispanics were “invading” the United States, Trump cast blame on targets often invoked by conservatives after such mass shootings, including video games.
Although Trump has previously delivered remarks in the aftermath of violent acts committed by white supremacists and white nationalists during his presidency, Monday’s speech marked the first time that the President had chosen to specifically condemn “white supremacy,” rather than deliver a more general condemnation of “hate.”
In his rhetoric, both on his Twitter account and on the campaign trail, Trump uses non-whites as a foil, beginning with his 2015 campaign announcement speech, in which he described Mexican immigrants as “rapists” who bring crime and drugs to America.
That rhetoric reappeared in the 2018 Congressional elections as Trump spoke about an “invasion” from South and Central America taking up a significant portion of his rally stump speech.
As the 2020 election draws nearer, Trump’s campaign strategy appears similar: demonizing racial minorities and prominent Democrats of color, most recently Rep. Elijah Cummings, D-Md., the chairman of the House Oversight Committee.
Trump critics not appeased by his Monday speech
Commentators said Monday’s condemnation of white supremacy marked a 180-degree turn for the President. But his performance did not leave many observers convinced of his sincerity.
House Homeland Security Committee Chairman Bennie Thompson, D-Miss., called the President’s speech “meaningless.”
“We know tragedy after tragedy his words have not led to solid action or any change in rhetoric. We know his vile and racist words have incited violence and attacks on Americans,” he said in a statement. “Now dozens are dead and white supremacist terrorism is on the rise and is now our top domestic terrorism threat.”
Sen. Ron Wyden, D-Ore., tweeted that Trump had “addressed the blaze today with the equivalent of a water balloon” after “fanning the flames of white supremacy for two-and-a-half years in the White House.”
Ohio Democratic Party Chairman David Pepper said Trump’s condemnation of white supremacy in Monday’s remarks could not make up for his years of racist campaign rhetoric.
“Through years of campaigning and hate rallies, to now say ‘I’m against hateful people and racism,’ is just hard to listen to,” Pepper said during a phone interview.
“Unless he’s willing to say ‘I know I’ve been a part of it’ with a full apology and some self-recognition, it felt like he was just checking the boxes.”
Pepper suggested that Trump “was saying what someone told him to say,” and predicted that Trump would soon walk back his remarks, much as he did after the 2017 “Unite the Right” white supremacist rally in Virginia.
Charlie Sykes, a former conservative talk radio host and editor of “The Bulwark,” echoed Pepper’s sentiments in a separate phone interview, but also called out Trump for failing to speak of the El Paso shooter’s motivations.
“It was so perfunctory and inadequate because he condemned the words ‘bigotry and racism,’ but he didn’t describe what he was talking about,” Sykes said.
Sykes criticized Trump for failing to take responsibility for his routine use of racist rhetoric, including descriptions of immigrants as “invaders” who “infest” the United States.
“Unless you’re willing to discuss the dehumanization behind the crimes, the invocation of certain words doesn’t change anything.”
Another longtime GOP figure whom Trump failed to impress was veteran strategist Rick Wilson, who cited the speech as the latest example of “the delta between Trump on the TelePrompTer and Trump at a rally,” a difference he described as “enormous.”
“Nothing about that speech had a ring of authenticity to it,” said Wilson, a legendary GOP ad maker and the author of “Everything Trump Touches Dies.”
“The contrast between the speechwriter’s handiwork and the real Donald Trump…is rather marked,” he said.
Where does online free speech – and allegations of ‘hate crimes’ – go from here?
Although the social media companies are making more efforts to harness and expunge online hate, they are unlikely to be able to get very far without someone – perhaps even President Trump – crying foul.
Putting the politics of online hate speech aside, the U.S. does take a fundamentally different approach to freedom of expression than does Europe.
According to Human Rights Watch, hundreds of French citizens are convicted of “apologies for terrorism” each year, a charge that covers any positive comment about a terrorist or terrorist organization. Online offenses are treated especially harshly.
By contrast, the U.S. has a fundamental commitment to the freedom of speech—including speech that is indecent, offensive, and hateful.
The Supreme Court has ruled that speech is unprotected when it is “directed to inciting or producing imminent lawless action” and is “likely to incite or produce such action.”
But this exception is extremely narrow—in Brandenburg v. Ohio, the Court reversed the conviction of a KKK group that advocated for violence as a means of political reform, arguing that their statements did not express an immediate intent to do violence.
The limitations on government leave the responsibility of combating online extremism to the digital platforms themselves, said Open Technology Institute Director Sarah Morris at a panel last month.
“In general, private companies have a lot more flexibility in how they respond to terrorist propaganda than Congress does,” said Emma Llansó, Director of the Free Expression Project at the Center for Democracy & Technology. “They need to be clear about what their policies are and enforce them transparently.”
But companies also need to carefully consider how they will respond to pressure from governments and individuals around the world, said Llansó, adding that “no content policy or community guideline is ever applied just in the circumstances it was designed for.”
“As the experience of social media companies has shown us, content moderation is extremely difficult to do well,” Llansó concluded. “It requires an understanding of the context that the speaker and the audience are operating in, which a technical infrastructure provider is not likely to have.”
(Managing Editor Andrew Feinberg and Reporter Emily McPhie contributed reporting to this article. Photo of Vice President Pence beside Trump speaking on August 5, 2019, from the White House.)
Vague Social Media Laws Create Fear in the Middle East. Can Encryption Tools Help?
Experts discuss how social media is being treated in the Middle East and how to respond.
WASHINGTON, January 25, 2022 – Four experts said Monday that social media, far from being the savior of democracy in the Middle East, is – along with government regulation of it – beginning to hurt civil rights activists.
The world is witnessing an increase in laws restricting social media access and hence regulating freedom of speech, especially in the Middle East, agreed the panelists, speaking at a Brookings Institution event.
Dina Hussein, the head of counterterrorism and dangerous organizations for Europe, the Middle East, and Africa at Facebook, and Chris Meserole, a senior fellow at the Brookings Institution, stated that too many countries are passing vague laws about what is and isn’t allowed on social media.
These new laws are purposefully unclear, they said. This new strategy has made it easier for the government to take down posts and restrict critics’ internet access while leaving up the posts of supporters and government officials.
These laws also spread fear among the public because their vagueness puts anyone at risk of being arrested for something they post, the panelists said.
When asked what can be done, Hussein said that Facebook promotes honesty through a website focused on its own transparency, which also raises awareness of other countries’ laws for its users. In addition, Facebook is working directly to support civil rights activists in parts of the world that are implementing such laws, she said.
Encryption to avoid surveillance
Meserole said that democratic governments should not be fighting “fire with fire.” Instead, he wants civil rights groups in the Middle East to strengthen their ability to operate without social media. Many activists rely on social media to build their bases and spread their message. So, Meserole emphasized that as the authoritarian regimes increase their abilities to watch, manipulate, and censor social media, democratic governments should invest in technology that will help those who are fighting for civil rights encrypt their media or work outside of the surveillance of government.
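Meserole’s point about encrypting media can be illustrated with a toy example. The Python sketch below shows a one-time pad: XORing a message with a random key of equal length, leaving ciphertext that is unreadable without the key. This is a conceptual illustration only (the function names and message are invented here); activists would rely on vetted, authenticated tools such as the Signal protocol rather than anything hand-rolled.

```python
import secrets

def otp_encrypt(plaintext: bytes) -> tuple[bytes, bytes]:
    # One-time pad: XOR the message with a random key of equal length.
    # Secure only if the key is truly random, kept secret, and never reused.
    key = secrets.token_bytes(len(plaintext))
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
    return key, ciphertext

def otp_decrypt(key: bytes, ciphertext: bytes) -> bytes:
    # XOR with the same key undoes the encryption.
    return bytes(c ^ k for c, k in zip(ciphertext, key))

key, ct = otp_encrypt(b"Meeting moved to Thursday.")
assert otp_decrypt(key, ct) == b"Meeting moved to Thursday."
```

The ciphertext alone reveals nothing but the message length; a government watching the channel sees only random bytes, which is the property Meserole wants civil rights groups to have.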
Another concern of the guest speakers was the rise in online misinformation and the trend of authoritarian regimes making new accounts to promote their message rather than trying to censor the language of the opposition.
Some people wonder why these regimes don’t simply eliminate social media within their countries. Meserole’s answer is that governments derive their own benefits from social media, so they instead pass vague internet laws that give them more legal control.
Former GOP Congressman and UK MP Highlight Dangers of Disinformation and Urge Regulation
Will Hurd and Member of Parliament Damien Collins say disinformation on social media platforms a worry in midterm elections.
WASHINGTON, January 11, 2022 – Former Republican Rep. Will Hurd said that disinformation campaigns could have a very concerning effect on the upcoming midterm elections.
He and the United Kingdom’s Member of Parliament Damien Collins urged new measures to hold tech and social media companies accountable for disinformation.
Hurd particularly expressed concern about how disinformation sows doubts about the legitimacy of elections and about effective treatments for COVID-19. The consequences of being misinformed on these topics are quite significant, he and Collins said Tuesday during a webinar hosted by the Washington Post.
Hurd, a Texan, said that the American 2020 election was the most secure the nation has ever had, and yet disinformation around it led to the insurrection at the Capitol.
Collins agreed that democratic elections are particularly at risk, in part from ever-present disinformation around COVID and its effects on public health and politics. “A lack of regulation online has left too many people vulnerable to abuse, fraud, violence, and in some cases even loss of life,” he said.
Without regulation of tech and media companies, Collins said, citizens are reliant on whistleblowers, investigative journalists, and self-serving reports from companies that manipulate their data.
Unless government gets involved, they said, the nation will remain ignorant of the spread of disinformation.
Tech companies need to increase their transparency, Collins said, even though that is something they have struggled to do. Big tech companies constantly conduct research and surveillance on their audiences, the performance of their services, and the effects of their platforms. Yet they fail to share this information with the public, which he said has a right to know the conclusions of that research.
In addition to increasing transparency and accountability, many lawmakers are attempting to grapple with the spread of disinformation. Some propose various changes to Section 230 of the Communications Decency Act of 1996.
Hurd said that the issues surrounding Section 230 will not be resolved before the midterm elections, and he recommended that policy-makers take steps outside of new legislation.
For example, the administration of President Joe Biden could lead its own federal reaction to misinformation to help citizens differentiate between fact and fiction, said Hurd.
Greene, Paul Social Media Developments Resurface Section 230 Debate
Five days into the new year and two developments bring Section 230 protections back into focus.
WASHINGTON, January 5, 2022 – The departure of Republican Kentucky Senator Rand Paul from YouTube and the banning of Georgia Republican Representative Marjorie Taylor Greene from Twitter at the beginning of the new year have rekindled the debate over what lawmakers will do about Section 230 protections for Big Tech.
Paul removed himself Monday from the video-sharing platform after getting two strikes on his channel for violating the platform’s rules on Covid-19 misinformation, saying he is “[denying] my content to Big Tech…About half of the public leans right. If we all took our messaging to outlets of free exchange, we could cripple Big Tech in a heartbeat.”
Meanwhile, Greene has been permanently suspended from Twitter following repeated violations of Twitter’s terms of service. She has previously been rebuked by both her political opponents and allies for spreading fake news and mis/disinformation since she was elected in 2020. Her rap sheet includes being accused of spreading conspiracy theories promoting white supremacy and antisemitism.
It was ultimately the spreading of Covid-19 misinformation that got Greene permanently banned from Twitter on Sunday. She had received four previous “strikes” related to Covid-19 misinformation, according to The New York Times; her fifth strike, on Sunday, resulted in her account’s permanent suspension.
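Twitter’s published Covid-19 misleading-information policy escalates penalties with each strike. The sketch below paraphrases that escalation ladder; the function name is invented, and the thresholds come from Twitter’s public policy page as of early 2022, not from the reporting above.

```python
def enforcement_action(strikes: int) -> str:
    # Escalation ladder paraphrased from Twitter's published
    # Covid-19 misleading-information policy (as of early 2022).
    if strikes <= 1:
        return "no account-level action"
    if strikes in (2, 3):
        return "12-hour account lock"
    if strikes == 4:
        return "7-day account lock"
    return "permanent suspension"

# A fifth strike, like Greene's on Sunday, triggers permanent suspension.
assert enforcement_action(5) == "permanent suspension"
```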
Just five days into the new year, Greene’s situation – and the quickly-followed move by Paul – has reignited the tinderbox that is Section 230 of the Communications Decency Act, which largely shields big technology platforms from liability for posts by their users.
As it stands now, Twitter is well within its rights to delete or suspend the accounts of any person who violates its terms of service. The right to free speech protected by the First Amendment does not prevent a private corporation, such as Twitter, from enforcing its rules.
In response to her Tweets, Texas Republican Congressman Dan Crenshaw called Greene a “liar and an idiot.” His comments notwithstanding, Crenshaw, like many conservative legislators, has argued that social media companies have become an integral part of the public forum and thus should not have the authority to unilaterally ban or censor voices on their platforms.
Some states, such as Texas and Florida, have gone as far as making it illegal for companies to ban political figures. Though Florida’s bill was quickly halted in the courts, that did not stop Texas from trying to enact similar laws (though they were met with similar results).
Crenshaw himself has proposed federal amendments to Section 230 for any “interactive computer service” that generates $3 billion or more in annual revenue or has 300 million or more monthly users.
The bill – which is still being drafted and does not have an official designation – would allow users to sue social media platforms for the removal of legal content based on political views, gender, ethnicity, and race. It would also make it illegal for these companies to remove any legal, user generated content from their website.
Under Crenshaw’s bill, a company such as Facebook or Twitter could be compelled to host any legal speech – objectionable or otherwise – at the risk of being sued. This includes overtly racist, sexist, or xenophobic slurs and rhetoric. While a hosting website might be morally opposed to being party to such kinds of speech, if said speech is not explicitly illegal, it would thus be protected from removal.
While Crenshaw would amend Section 230, other conservatives have advocated for its wholesale repeal. Sen. Lindsey Graham, R-South Carolina, put forward Senate Bill 2972 which would do just that. If passed, the law would go into effect on the first day of 2024, with no replacement or protections in place to replace it.
Consequences of such legislation
This is a nightmare scenario for every company with an online presence that hosts user-generated content. If a repeal bill were to pass with no replacement legislation in place, every online company would suddenly become directly responsible for all user content hosted on its platforms.
With the repeal of Section 230, websites would default to being treated as publishers. If users upload illegal content to a website, it would be as if the company published the illegal content themselves.
This would likely exacerbate the issue of alleged censorship that Republicans are concerned about. The sheer volume of content generated on platforms like Reddit and YouTube is too massive for human moderation teams to review.
Companies would likely be forced to rely on heavier handed algorithms and bots to censor anything that could open them to legal liability.
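The blunt, context-blind nature of such automation can be seen in even a few lines of code. The hypothetical blocklist filter below (terms and function name invented for illustration) flags any post containing a blocked word, regardless of whether the post advocates violence or merely reports on it:

```python
# Illustrative placeholder terms; a real system would use far larger
# lists plus machine-learned classifiers.
BLOCKLIST = {"attack", "bomb"}

def should_remove(post: str) -> bool:
    # Flag a post if any word (case-insensitive, punctuation stripped)
    # appears on the blocklist.
    words = {w.strip(".,!?\"'").lower() for w in post.split()}
    return not BLOCKLIST.isdisjoint(words)

assert should_remove("They plan to attack the data center")
# Context-blind: straight news reporting is flagged too.
assert should_remove("Officials condemned the bomb threat")
assert not should_remove("Lovely weather today")
```

A filter like this cannot distinguish incitement from journalism or condemnation, which is why leaning harder on automation tends to mean more over-removal.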
Republicans are not alone in their criticism of Section 230, however. Democrats have also flirted with amending or abolishing Section 230, albeit for very different reasons.
Many Democrats believe that Big Tech uses Section 230 to deflect responsibility, and that as long as companies are afforded its protections, they will not adjust their content moderation policies to mitigate allegedly dangerous or hateful user speech that has real-world consequences.
Some Democrats have written bills that would carve out numerous exemptions to Section 230. Some seek to address the sale of firearms online, others focus on the spread of Covid-19 misinformation.
Some Democrats have also introduced the Safe Tech Act, which would hold companies accountable for failing to “remove, restrict access to or availability of, or prevent dissemination of material that is likely to cause irreparable harm.”
The reality right now is that two parties are diametrically opposed on the issue of Section 230.
While Republicans believe there is unfair content moderation that disproportionately censors conservative voices, Democrats believe that Big Tech is not doing enough to moderate their content and keep users safe.