Social Media

Seeking to Quell ‘Evil Contagion’ of ‘White Supremacy,’ President Trump May Ignite New Battle Over Online Hate Speech

WASHINGTON, August 5, 2019 — President Donald Trump on Monday morning attempted to strike a tone of unity by denouncing the white, anti-Hispanic man who “shot and murdered 20 people, and injured 26 others, including precious little children.”

In speaking about the two significant mass shootings over the weekend in Texas and Ohio, Trump delivered prepared remarks in which he specifically denounced “racism, bigotry, and white supremacy,” and linked them to the “warp[ed] mind” of the racially-motivated El Paso killer.

That shooter – now in custody – posted a manifesto online before the shooting in which he said he was responding to the “Hispanic invasion of Texas.” The shooter cited the March 15 massacre at two mosques in Christchurch, New Zealand, as an inspiration for his action.

In White House remarks with Vice President Mike Pence standing at his side, Trump proposed solutions to “stop this evil contagion.” Trump denounced “hate” or “racist hate” four times.

Trump’s first proposed solution: “I am directing the Department of Justice to work in partnership with local, state, and federal agencies, as well as social media companies, to develop tools that can detect mass shooters before they strike.”

That proposal appeared to be an initiative that was either targeted at – or potentially an opportunity for collaboration with – social media giants like Twitter, Facebook and Google.

Indeed, Trump and others on the political right have repeatedly criticized these social media giants for bias against Trump and Republicans.

Sometimes, this right-wing criticism of Twitter emerges after a user is banned for violating the social media company’s terms of service against “hate speech.”

In Trump’s remarks, he also warned that “we must shine light on the dark recesses of the internet.” Indeed, Trump said that “the perils of the internet and social media cannot be ignored, and they will not be ignored.”

But it must be equally clear to the White House that the El Paso killer – in his online manifesto – used anti-Hispanic and anti-immigrant rhetoric very similar to Trump’s own repeated words about an “invasion” of Mexican and other Latin Americans at the United States border.

Hence this mass murder contains elements of political peril for both Donald Trump and for his frequent rivals at social media companies like Twitter, Facebook and Google.

8chan gets taken down by its network provider

Minutes before the El Paso attack at a Wal-Mart, a manifesto titled “The Inconvenient Truth” was posted to the online platform 8chan, claiming that the shooting was in response to the “Hispanic invasion.” The killer specifically cited the Christchurch shooter’s white supremacist manifesto as an inspiration.

Social media platforms, previously exploited by Islamic terrorists, are increasingly being used by white supremacist terrorists. In addition to posting his manifesto online, the Christchurch shooter livestreamed his attack on Facebook.

In April, a man posted an anti-Semitic and white nationalist letter to the same online forum, 8chan, before opening fire at a synagogue near San Diego, California.

And on July 28, the gunman who killed three people at a garlic festival in Gilroy, California, allegedly promoted a misogynist white supremacist book on Instagram just prior to his attack.

But Saturday’s El Paso shooting motivated some companies to act. Cloudflare, 8chan’s network provider, pulled its support for 8chan early on Monday morning, calling the platform a “cesspool of hate.”

“While removing 8chan from our network takes heat off of us, it does nothing to address why hateful sites fester online,” wrote Cloudflare CEO Matthew Prince.

“It does nothing to address why mass shootings occur,” said Prince. “It does nothing to address why portions of the population feel so disenchanted they turn to hate. In taking this action we’ve solved our own problem, but we haven’t solved the internet’s.”

Prince went on to voice his discomfort with the company taking on the role of content arbiter, and pointed to Europe’s attempts at greater government involvement.

The Christchurch massacre opened a dialogue between big tech and European critics of ‘hate speech’

Following the Christchurch attack, 18 governments in May signed the Christchurch Call pledge (PDF) seeking to stop the internet from being used as a tool by violent extremists. The U.S. did not sign on, and the White House voiced concerns that the document would violate the First Amendment.

Dubbed “The Christchurch Call to Action to Eliminate Terrorist and Violent Extremist Content Online,” the May document included commitments by both online service providers, and by governments.

Among other measures, the online providers were to “[t]ake transparent, specific measures seeking to prevent the upload of terrorist and violent extremist content and to prevent its dissemination on social media.”

Governments were to “[e]nsure effective enforcement of applicable laws that prohibit the production or dissemination of terrorist and violent extremist content.”

Although Silicon Valley has had a reputation for supporting a libertarian view of free speech, the increasingly unruly world of social media over the past decade has put that First Amendment absolutism to the test.

Indeed, five big tech giants – Google, Amazon, Facebook, Twitter and Microsoft – voiced their support for the Christchurch Call on the day of its release.

In particular, they acknowledged the apparent restrictions on freedom of speech that the Christchurch Call would impose, saying that the massacre was “a horrifying tragedy” that made it “right that we come together, resolute in our commitment to ensure we are doing all we can to fight the hatred and extremism that lead to terrorist violence.”

They also noted that the Christchurch Call expands on the Global Internet Forum to Counter Terrorism set up by Facebook, Google’s YouTube, Microsoft and Twitter in the summer of 2017.

The organization’s objective is to disrupt terrorists’ ability to promote terrorism, disseminate violent propaganda, and exploit or glorify real-world acts of violence.

The tech giants said (PDF) that they were sharing more information about how they could “detect and remove this content from our services, updates to our individual terms of use, and more transparency for content policies and removals.”

Will Trump politicize the concept of ‘hate speech’ that tech companies are uniting with Europe to take down?

In his Monday statement commenting on an ostensible partnership between the Justice Department and the social media companies, Trump referred to the need to “detect mass shooters before they strike.”

And he had this specific example: “As an example, the monster in the Parkland high school in Florida had many red flags against him, and yet nobody took decisive action. Nobody did anything. Why not?”

Part of the challenge now faced by social media companies is frankly political. Although Twitter has taken aggressive steps to eradicate ISIS content from its platform, it has not applied the same tools and algorithms to take down white supremacist content.

Society accepts the risk of inconveniencing potentially related accounts, such as those of Arabic-language broadcasters, for the benefit of banning ISIS content, Motherboard summarized earlier this year, based on its interviews with Twitter employees.

But if these same aggressive tactics were deployed against white nationalist terrorism, the algorithms would likely flag content from prominent Republican politicians, far-right commentators – and Donald Trump himself, these employees said.

Indeed, right after declining to sign the Christchurch Call, the White House escalated its war against American social media by announcing a campaign asking internet users to share stories of when they felt censored by Facebook, Twitter and Google’s YouTube.

And in June, Twitter made it clear that it was speaking directly about Tweets by prominent public officials, including the president, that violated its terms of service.

“In the past, we’ve allowed certain Tweets that violated our rules to remain on Twitter because they were in the public’s interest, but it wasn’t clear when and how we made those determinations,” a Twitter official said. “To fix that, we’re introducing a new notice that will provide additional clarity in these situations, and sharing more on when and why we’ll use it.”

White House officials did not immediately respond to questions about whether the Trump administration was reconsidering its opposition to the Christchurch Call.

Will Trump’s speech put others in the spotlight, or keep it on him and his rhetoric?

In addition to highlighting the anticipated effort with social media, Trump offered four additional suggested “bipartisan solutions” to the “evil contagion” evident in the Texas and Ohio mass shootings.

They included “stopp[ing] the glorification of violence in our society” in video games, addressing mental health laws “to better identify mentally disturbed individuals,” keeping firearms from those “judged to pose a grave risk to public safety,” and seeking the death penalty against those who commit hate crimes and mass murders.

Trump’s advisers said that they hoped the speech would stem the tide of media attention being given to the links between his frequent use of dehumanizing language to describe Latin American immigrants and the rhetoric of the El Paso shooter’s manifesto.

As he delivered his prepared remarks from a TelePrompTer in a halting cadence, Trump appeared to be reading the speech for the first time. This led to an awkward moment when he suggested that the second shooting of the weekend – which had taken place outside a Dayton, Ohio bar – had been in Toledo, Ohio.

But despite displaying the visible discomfiture that is evident when he reads prepared remarks to the White House press pool cameras, Trump made an attempt to silence critics like former El Paso Congressman Beto O’Rourke – who just hours before had explicitly called the President a white nationalist – by calling for defeat of “sinister ideologies” of hate.

“In one voice, our nation must condemn racism, bigotry, and white supremacy,” Trump said. “Hate has no place in America. Hatred warps the mind, ravages the heart, and devours the soul.”

Trump did not elaborate on the hate-based motivations of the El Paso shooter. Rather than reflect on where the El Paso shooter may have gotten the idea that Hispanics were “invading” the United States, Trump cast blame on a target often invoked by conservatives after such mass shootings: video games.

Although Trump has previously delivered remarks in the aftermath of violent acts committed by white supremacists and white nationalists during his presidency, Monday’s speech marked the first time that the President had chosen to specifically condemn “white supremacy,” rather than deliver a more general condemnation of “hate.”

In his rhetoric, both on his Twitter account and on the campaign trail, Trump uses non-whites as a foil, beginning with his 2015 campaign announcement speech, in which he described Mexican immigrants as “rapists” who bring crime and drugs to America.

That rhetoric reappeared in the 2018 Congressional elections as Trump spoke about an “invasion” from South and Central America taking up a significant portion of his rally stump speech.

As the 2020 election draws nearer, Trump’s campaign strategy appears similarly to rely on demonizing racial minorities and prominent Democrats of color, most recently Rep. Elijah Cummings, D-Md., the chairman of the House Oversight Committee.

Trump critics not appeased by his Monday speech

Commentators said Monday’s condemnation of white supremacy marked a 180-degree turn for the President. But his performance did not leave many observers convinced of his sincerity.

House Homeland Security Committee Chairman Bennie Thompson, D-Miss., called the President’s speech “meaningless.”

“We know tragedy after tragedy his words have not led to solid action or any change in rhetoric. We know his vile and racist words have incited violence and attacks on Americans,” he said in a statement. “Now dozens are dead and white supremacist terrorism is on the rise and is now our top domestic terrorism threat.”

Sen. Ron Wyden, D-Ore., tweeted that Trump had “addressed the blaze today with the equivalent of a water balloon” after “fanning the flames of white supremacy for two-and-a-half years in the White House.”

Ohio Democratic Party Chairman David Pepper said Trump’s condemnation of white supremacy in Monday’s remarks could not make up for his years of racist campaign rhetoric.

“Through years of campaigning and hate rallies, to now say ‘I’m against hateful people and racism,’ is just hard to listen to,” Pepper said during a phone interview.

“Unless he’s willing to say ‘I know I’ve been a part of it’ with a full apology and some self recognition, it felt like he was just checking the boxes.”

Pepper suggested that Trump “was saying what someone told him to say,” and predicted that Trump would soon walk back his remarks, much as he did after the 2017 “Unite the Right” white supremacist rally in Virginia.

Charlie Sykes, a former conservative talk radio host and editor of “The Bulwark,” echoed Pepper’s sentiments in a separate phone interview, but also called out Trump for failing to speak of the El Paso shooter’s motivations.

“It was so perfunctory and inadequate because he condemned the words ‘bigotry and racism,’ but he didn’t describe what he was talking about,” Sykes said.

Sykes criticized Trump for failing to take responsibility for his routine use of racist rhetoric, including descriptions of immigrants as “invaders” who “infest” the United States.

“Unless you’re willing to discuss the dehumanization behind the crimes, the invocation of certain words doesn’t change anything.”

Another longtime GOP figure whom Trump failed to impress was veteran strategist Rick Wilson, who cited the speech as the latest example of “the delta between Trump on the TelePrompTer and Trump at a rally,” a difference he described as “enormous.”

“Nothing about that speech had a ring of authenticity to it,” said Wilson, a legendary GOP ad maker and the author of “Everything Trump Touches Dies.”

“The contrast between the speechwriter’s handiwork and the real Donald Trump…is rather marked,” he said.

Where does online free speech – and allegations of ‘hate crimes’ – go from here?

Although the social media companies are making more efforts to identify and expunge online hate, they are unlikely to get very far without someone – perhaps even President Trump – crying foul.

Putting the politics of online hate speech aside, the U.S. does take a fundamentally different approach to freedom of expression than does Europe.

According to Human Rights Watch, hundreds of French citizens are convicted of “apology for terrorism” each year, an offense that covers any positive comment about a terrorist or terrorist organization. Online offenses are treated especially harshly.

By contrast, the U.S. has a fundamental commitment to the freedom of speech—including speech that is indecent, offensive, and hateful.

The Supreme Court has ruled that speech is unprotected when it is “directed to inciting or producing imminent lawless action” and is “likely to incite or produce such action.”

But this exception is extremely narrow—in Brandenburg v. Ohio, the Court reversed the conviction of a Ku Klux Klan leader who advocated violence as a means of political reform, holding that his statements did not express an immediate intent to do violence.

The limitations on government leave the responsibility of combating online extremism to the digital platforms themselves, said Open Technology Institute Director Sarah Morris at a panel last month.

“In general, private companies have a lot more flexibility in how they respond to terrorist propaganda than Congress does,” said Emma Llansó, Director of the Free Expression Project at the Center for Democracy & Technology. “They need to be clear about what their policies are and enforce them transparently.”

But companies also need to carefully consider how they will respond to pressure from governments and individuals around the world, said Llansó, adding that “no content policy or community guideline is ever applied just in the circumstances it was designed for.”

“As the experience of social media companies has shown us, content moderation is extremely difficult to do well,” Llansó concluded. “It requires an understanding of the context that the speaker and the audience are operating in, which a technical infrastructure provider is not likely to have.”

(Managing Editor Andrew Feinberg and Reporter Emily McPhie contributed reporting to this article. Photo of Vice President Pence beside Trump speaking on August 5, 2019, from the White House.)

Social Media

Americans Should Look to Filtration Software to Block Harmful Content from View, Event Hears

One professor said it is the only way to solve the harmful content problem without encroaching on free speech rights.

Photo of Adam Neufeld of Anti-Defamation League, Steve Delbianco of NetChoice, Barak Richman of Duke University, Shannon McGregor of University of North Carolina (left to right)

WASHINGTON, July 21, 2022 – Researchers at an Internet Governance Forum event Thursday recommended the use of third-party software that filters out harmful content on the internet, in an effort to combat what they say are social media algorithms that feed them content they don’t want to see.

Users of social media sites often don’t know what algorithms are filtering the information they consume, said Steve DelBianco, CEO of NetChoice, a trade association that represents the technology industry. Most algorithms function to maximize user engagement by manipulating users’ emotions, which is particularly worrisome, he said.

But third-party software, such as Sightengine and Amazon’s Rekognition – which moderate what users see by screening out images and videos that the user flags as objectionable – could act in place of other solutions to tackle disinformation and hate speech, said Barak Richman, professor of law and business at Duke University.

Richman argued that this “middleware technology” is the only way to solve this universal problem without encroaching on free speech rights. He suggested Americans invest in these technologies – which would be supported by popular platforms including Facebook, Google, and TikTok – to create a buffer between harmful algorithms and the user.

Such technologies already exist in limited applications that offer less personalization and accuracy in filtering, said Richman. But the market demand needs to increase to support innovation and expansion in this area.
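The middleware approach the panelists describe can be sketched in a few lines. This is a minimal, hypothetical illustration, not any vendor’s actual product: it assumes Amazon Rekognition’s real `detect_moderation_labels` API supplies content labels, while the wrapper function names, the blocklist categories, and the client-side policy are invented for the example.

```python
def moderation_labels(image_bytes, min_confidence=60.0):
    """Ask Amazon Rekognition which moderation categories an image falls into.

    Requires AWS credentials and the third-party boto3 SDK; imported lazily
    so the pure filtering logic below can be used without AWS access.
    """
    import boto3

    client = boto3.client("rekognition")
    resp = client.detect_moderation_labels(
        Image={"Bytes": image_bytes}, MinConfidence=min_confidence
    )
    return [label["Name"] for label in resp["ModerationLabels"]]


def should_hide(labels, user_blocklist):
    """Client-side policy: hide a post if any label is on the user's own list."""
    return any(label in user_blocklist for label in labels)


# The key design point of middleware: the user, not the platform,
# chooses which categories get filtered from view.
blocklist = {"Hate Symbols", "Graphic Violence"}
print(should_hide(["Hate Symbols"], blocklist))  # True
print(should_hide(["Smoking"], blocklist))       # False
```

The separation matters: the moderation API only labels content, while the hide-or-show decision lives entirely with the user’s chosen blocklist, which is what lets this sit between the platform and the reader without a central speech arbiter.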

Americans across party lines believe that there is a problem with disinformation and hate speech, but disagree on the solution, added fellow panelist Shannon McGregor, senior researcher at the Center for Information, Technology, and Public Life at the University of North Carolina.

The conversation comes as debate continues regarding Section 230, a provision in the Communications Decency Act that protects technology platforms from being liable for content their users post. Some say Section 230 only protects “neutral platforms,” while others claim it allows powerful companies to ignore user harm. Experts in the space disagree on the responsibility of tech companies to moderate content on their platforms.


Free Speech

Experts Reflect on Supreme Court Decision to Block Texas Social Media Bill

Observers on a Broadband Breakfast panel offered differing perspectives on the high court’s decision.

Parler CPO Amy Peikoff

WASHINGTON, June 2, 2022 – Experts hosted by Broadband Breakfast Wednesday were split on what to make of the Supreme Court’s 5-4 decision to reverse a lower court order lifting a ban on a Texas social media law that would have made it illegal for certain large platforms to crack down on speech they deem reprehensible.

The decision keeps the law from taking effect until a full determination is made by a lower court.

During a Broadband Live Online event on Wednesday, Ari Cohn, free speech counsel for tech lobbyist TechFreedom, argued that the bill “undermines the First Amendment to protect the values of free speech.

“We have seen time and again over the course of history that when you give the government power to start encroaching on editorial decisions [it will] never go away, it will only grow stronger,” he cautioned. “It will inevitably be abused by whoever is in power.”

Nora Benavidez, senior counsel and director of digital justice and civil rights for advocate Free Press, agreed with Cohn. “This is a state effort to control what private entities do,” she said Wednesday. “That is unconstitutional.

“When government attempts to invade into private action that is deeply problematic,” Benavidez continued. “We can see hundreds and hundreds of years of examples of where various countries have inserted themselves into private actions – that leads to authoritarianism, that leads to censorship.”

Different perspectives

Principal at McCollough Law Firm Scott McCollough said Wednesday that he believed the law should have been allowed to stand.

“I agree the government should not be picking and choosing who gets to speak and who does not,” he said. “The intent behind the Texas statute was to prevent anyone from being censored – regardless of viewpoint, no matter what [the viewpoint] is.”

McCollough argued that this case was about which free speech values supersede the other – “those of the platforms, or those of the people who feel that they are being shut out from what is today the public square.

“In the end it will be a court that acts, and the court is also the state,” McCollough added. “So, in that respect, the state would still be weighing in on who wins and who loses – who gets to speak and who does not.”

Chief policy officer of social media platform Parler Amy Peikoff said Wednesday that her primary concern was “viewpoint discrimination in favor of the ruling elite.”

Peikoff was particularly concerned about coordination between state agencies and social media platforms to “squelch certain viewpoints.”

Peikoff clarified that she did not believe that the Texas law was the best vehicle to address these concerns, however, stating instead that lawsuits – preferably private ones – be used to remove the “censorious cancer,” rather than entangling a government entity in the matter.

“This cancer grows out of a partnership between government and social media to squelch discussion about certain viewpoints and perspectives.”

Our Broadband Breakfast Live Online events take place on Wednesday at 12 Noon ET. Watch the event on Broadband Breakfast, or REGISTER HERE to join the conversation.

Wednesday, June 1, 2022, 12 Noon ET – BREAKING NEWS EVENT! – The Supreme Court, Social Media and the Culture Wars

The Supreme Court on Tuesday blocked a Texas law that would ban large social media companies from removing posts based on the views they express. Join us for this breaking news event of Broadband Breakfast Live Online in which we discuss the Supreme Court, social media and the culture wars.

Panelists:

  • Scott McCollough, Attorney, McCollough Law Firm
  • Amy Peikoff, Chief Policy Officer, Parler
  • Ari Cohn, Free Speech Counsel, TechFreedom
  • Nora Benavidez, Senior Counsel and Director of Digital Justice and Civil Rights at Free Press
  • Drew Clark (presenter and host), Editor and Publisher, Broadband Breakfast

Panelist resources:

W. Scott McCollough has practiced communications and Internet law for 38 years, with a specialization in regulatory issues confronting the industry.  Clients include competitive communications companies, Internet service and application providers, public interest organizations and consumers.

Amy Peikoff is the Chief Policy Officer of Parler. After completing her Ph.D., she taught at universities (University of Texas, Austin, University of North Carolina, Chapel Hill, United States Air Force Academy) and law schools (Chapman, Southwestern), publishing frequently cited academic articles on privacy law, as well as op-eds in leading newspapers across the country on a range of issues. Just prior to joining Parler, she founded and was President of the Center for the Legalization of Privacy, which submitted an amicus brief in United States v. Facebook in 2019.

Ari Cohn is Free Speech Counsel at TechFreedom. A nationally recognized expert in First Amendment law, he was previously the Director of the Individual Rights Defense Program at the Foundation for Individual Rights in Education (FIRE), and has worked in private practice at Mayer Brown LLP and as a solo practitioner, and was an attorney with the U.S. Department of Education’s Office for Civil Rights. Ari graduated cum laude from Cornell Law School, and earned his Bachelor of Arts degree from the University of Illinois at Urbana-Champaign.

Nora Benavidez manages Free Press’s efforts around platform and media accountability to defend against digital threats to democracy. She previously served as the director of PEN America’s U.S. Free Expression Programs, where she guided the organization’s national advocacy agenda on First Amendment and free-expression issues, including press freedom, disinformation defense and protest rights. Nora launched and led PEN America’s media-literacy and disinformation-defense program. She also led the organization’s groundbreaking First Amendment lawsuit, PEN America v. Donald Trump, to hold the former president accountable for his retaliation against and censorship of journalists he disliked.

Drew Clark is the Editor and Publisher of BroadbandBreakfast.com and a nationally-respected telecommunications attorney. Drew brings experts and practitioners together to advance the benefits provided by broadband. Under the American Recovery and Reinvestment Act of 2009, he served as head of a State Broadband Initiative, the Partnership for a Connected Illinois. He is also the President of the Rural Telecommunications Congress.

Photo of the Supreme Court from September 2020 by Aiva.



Section 230

Narrow Majority of Supreme Court Blocks Texas Law Regulating Social Media Platforms

The decision resulted in an unusual court split. Justice Kagan sided with Justice Alito but refused to sign his dissent.

Caricature of Samuel Alito by Donkey Hotey used with permission

WASHINGTON, May 31, 2022 – On a narrow 5-4 vote, the Supreme Court of the United States on Tuesday blocked a Texas law that Republicans had argued would address the “censorship” of conservative voices on social media platforms.

Texas H.B. 20 was written by Texas Republicans to combat perceived bias against conservative viewpoints voiced on Facebook, Twitter, and other social media platforms with at least 50 million active monthly users.

Watch Broadband Breakfast Live Online on Wednesday, June 1, 2022

Broadband Breakfast on June 1, 2022 — The Supreme Court, Social Media and the Culture Wars

The bill was drafted at least in part as a reaction to President Donald Trump’s ban from social media. Immediately following the January 6 riots at the United States Capitol, Trump was simultaneously banned on several platforms and online retailers, including Amazon, Facebook, Twitter, Reddit, and myriad other websites.

See also Explainer: With Florida Social Media Law, Section 230 Now Positioned In Legal Spotlight, Broadband Breakfast, May 25, 2021

Close decision on First Amendment principles

A brief six-page dissent on the matter was released on Tuesday. Conservative Justices Samuel Alito, Neil Gorsuch, and Clarence Thomas dissented, arguing that the law should have been allowed to stand. Justice Elena Kagan also voted to let the law stand, though she did not join Alito’s penned dissent and did not elaborate further.

The decision was on an emergency application to vacate a one-sentence order of the Fifth Circuit Court of Appeals, which had stayed a federal district court’s injunction against the law. In other words, the law passed by the Texas legislature and signed by Gov. Greg Abbott is precluded from going into effect.

Tech lobbying group NetChoice – in addition to many entities in Silicon Valley – argued that the law would prevent social media platforms from moderating and addressing hateful and potentially inflammatory content.

In a statement, Computer & Communications Industry Association President Matt Schruers said, “We are encouraged that this attack on First Amendment rights has been halted until a court can fully evaluate the repercussions of Texas’s ill-conceived statute.”

“This ruling means that private American companies will have an opportunity to be heard in court before they are forced to disseminate vile, abusive or extremist content under this Texas law. We appreciate the Supreme Court ensuring First Amendment protections, including the right not to be compelled to speak, will be upheld during the legal challenge to Texas’s social media law.”

In a statement, Public Knowledge Legal Director John Bergmayer said, “It is good that the Supreme Court blocked HB 20, the Texas online speech regulation law. But it should have been unanimous. It is alarming that so many policymakers, and even Supreme Court justices, are willing to throw out basic principles of free speech to try to control the power of Big Tech for their own purposes, instead of trying to limit that power through antitrust and other competition policies. Reining in the power of tech giants does not require abandoning the First Amendment.”

In his dissent, Alito pointed out that the plaintiffs argued “HB 20 interferes with their exercise of ‘editorial discretion,’ and they maintain that this interference violates their right ‘not to disseminate speech generated by others.’”

“Under some circumstances, we have recognized the right of organizations to refuse to host the speech of others,” he said, referencing Hurley v. Irish-American Gay, Lesbian and Bisexual Group of Boston, Inc.

“But we have rejected such claims in other circumstances,” he continued, pointing to PruneYard Shopping Center v. Robins.

Will Section 230 be revamped on a full hearing by the Supreme Court?

“It is not at all obvious how our existing precedents, which predate the age of the internet, should apply to large social media companies, but Texas argues that its law is permissible under our case law,” Alito said.

Alito argued that there is a distinction between compelling a platform to host a message and refraining from discriminating against a user’s speech “on the basis of viewpoint.” He said that H.B. 20 adopted the latter approach.

Alito went on, arguing that the bill only applied to “platforms that hold themselves out as ‘open to the public,’” and “neutral forums for the speech of others,” and thus, the targeted platforms are not spreading messages they endorse.

Alito added that because the bill only targets platforms with more than 50 million users, it only targets entities with “some measure of common carrier-like market power and that this power gives them an ‘opportunity to shut out [disfavored] speakers.’”

Chief Justice John Roberts and Justices Stephen Breyer, Sonia Sotomayor, Brett Kavanaugh, and Amy Coney Barrett all voted affirmatively – siding with NetChoice LLC’s emergency application – to block H.B. 20 from being enforced.

