Parler, Gab, and Section 230: Right-Leaning Social Networks Push Alternative to Twitter and Facebook

Photo of Sen. Ted Cruz by Gage Skidmore used with permission

July 7, 2020 — Many conservatives have long accused popular social media platforms of discriminating against their ideas, but that feeling reached a new urgency in late May when Twitter flagged several of President Donald Trump’s tweets for touting unsubstantiated claims and glorifying violence.

The move sparked outrage from some on the right, with Republican users saying that Twitter had treated similarly controversial posts from other accounts differently.

Shortly after, Trump called for the revocation of Section 230 of the Communications Decency Act, which states that “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

Put simply, the law protects online platforms from liability for the content their users post. If a user posts an immediate threat or other illegal content on a website, it is generally the individual user, not the platform, who can be held responsible for the crime.

Shortly after he called for the statute’s repeal, Trump signed an executive order that attempted to restrict Section 230’s protections by potentially withholding federal funds from tech companies that engage in “viewpoint discrimination, deception to consumers, or other bad practices.”

The order was met with skepticism from many digital policy experts.

Michael Petricone, senior vice president of government affairs for the Consumer Technology Association, said it was “not only ill-considered, it is unconstitutional.”

“For the past few months, people… have relied on online platforms to connect and communicate,” he said. “It was Section 230 and the free speech protections enjoyed by online platforms that enabled their success and subsequently, their ability to support struggling Americans during the pandemic.”

The legal footing on which the order stands is unclear, but frustration at alleged incidents of bias has continued to grow.

Parler’s ‘free speech’ community standards

Several conservative commentators called for an exodus from Twitter.

In its stead, some have migrated to platforms like Parler, which claims to offer users an escape from alleged anti-conservative bias.

Public figures like Sen. Ted Cruz, R-Texas, and Fox News anchor Sean Hannity made Parler accounts that garnered hundreds of thousands of followers in mere days. Some, like Sen. Mike Lee, R-Utah, had previously made accounts but only recently resumed use of them.

Big tech companies like Facebook and Twitter “have an unparalleled ability to shape what Americans see and hear and ultimately think,” Cruz wrote in a post to the site. “And they use that power to silence conservatives and to promote their radical leftwing agenda.”

Parler aims to be “a non-biased free speech driven entity,” and cites Supreme Court rulings and Federal Communications Commission media regulations to justify much of what it deems off-limits.

However, the platform still removes posts and even users that run afoul of its content moderation policies, including legally protected content like crass speech, pornography and spam, all of which are permitted on Twitter, provided that pornography is tagged as “sensitive.”

In a post last week, Parler CEO John Matze warned users that such actions would not be tolerated on the website, and that if a user were in doubt about what is acceptable to post, he should “ask [himself] if [he] would say it on the streets of New York or national television.”

Further, the volume of unacceptable content on the platform has led the company to ask members of its user base to sift through potentially violent, pornographic, incestuous, bestial or otherwise undesirable content for two hours a day without compensation, promising future payment of an unspecified amount.

What Parler does clearly permit is virally shared false content. QAnon conspiracy theories and phraseology are common on the site, as are incendiary claims directed at political opponents.

One such post claims that Rep. Ilhan Omar, D-Minn., called for “all white men [to] be put in chains” in a post on Twitter. No such tweet exists.

One can find myriad examples of harmful behavior on mainstream platforms like Twitter and Facebook as well. But ultimately, the ability of those platforms, like Parler’s, to decide which legal content to leave untouched is protected by the very law that the president, whom many of Parler’s users and investors support, is trying to repeal.

Gab’s looser guidelines on content

Other alternative social media networks, like Gab, allow far more content on their platforms than Parler. Gab’s loose content guidelines permit almost anything that is not copyright infringement, impersonation, an unlawful threat, or obscene or pornographic material, although nudity for “protest or for educational/medical reasons” is permitted.

Because of the lax restrictions, CEO Andrew Torba said, “Gab is an online refuge for anyone who wishes to speak freely and securely away from the tyranny of Silicon Valley.”

However, such a refuge has led to communities centered on ethnic hatred that are generally banned on Parler, as well as on the more “mainstream” and popular social networks Twitter and Facebook. In 2018, Gab user Robert Bowers brought notoriety to the platform when he posted that he was “going in” before entering a synagogue, murdering 11 people and injuring several others.

Before the shooting, Bowers’ account was replete with anti-Semitic content. He railed against “Zionist Operated Governments” and the Hebrew Immigrant Aid Society, which assists refugees, and warned of a “kike infestation.” The posts and Bowers’ account were removed following the shooting.

Torba claimed that journalists who accuse Gab of being a safe haven for white supremacists and radicals are “Marxist propagandists and proven liars,” and that there are numerous healthy groups on the site.

He also said that in the wake of Twitter’s response to Trump’s tweets, the platform has seen a spike in membership to over four million users.

“We saw an increase of 100,000 new daily active users join the Gab community in the past week alone,” he said. “In June, we brought on 200,000 new and returning users after the President’s tweets started getting ‘fact-checked’ by Twitter.”

Despite Gab’s history, its community guidelines come closer to reflecting its stated vision and appear less arbitrarily enforced than Parler’s. Torba said that because Parler is available in Apple’s App Store and Google Play (both of which have barred Gab), it is required to enforce not only its own community guidelines but also those of its providers.

“[This] is likely why Parler is already mass banning users,” he said. “From what I’ve seen, Twitter has more free speech than Parler does.”

In an interview with Pastor Rick Wiles of TruNews (who has warned of a “brown invasion” of the United States and referred to the impeachment of President Donald Trump as a “Jew coup”), Torba said he refuses to “bend the knee” to those who wish to see Gab fail, and that he sees his work as eternally important.

“Hey, if Christ can get up on that cross,” he said, “I can pick up the cross and do what I do. That’s the way I see it.”

Gab CEO supports Section 230

Torba is vocally pro-Section 230. In a Gab News post, he championed the right of private companies to “moderate their platform as they see fit.”

“They can ban anyone for any reason,” he continued. “They can have a rule that says no one can post videos of their cat anymore if they so choose and they can certainly decide to.”

However, Torba also contended that Section 230’s protections do not extend to Twitter or Facebook’s fact-checking practices.

“When Big Tech platforms ‘fact-check’ posts or ‘editorialize’ content, Section 230 does not apply to that speech,” he said. “That speech is them speaking, not a user. Section 230 does not prevent them from being held liable for their own speech.”

Other right-leaning figures, like David Harsanyi of the National Review, have argued that while Twitter’s decision to mark Trump’s tweets as false or glorifying violence will only fuel accusations of bias, it is within the platform’s legal right to do so.

“No American, not even the president, has an inherent right to a social media account,” he said. “Tech companies such as Facebook and Twitter are free to ban any user they see fit. They’re free to accuse Donald Trump — and only Trump, if they see fit — of being a liar, even if they shouldn’t.”

Parler’s Matze has expressed a contrasting view.

In an interview with Fox News’ Laura Ingraham, Matze said that while he does not like the idea of being perceived as a Twitter alternative, he believes that Parler’s practices are in better keeping with free speech and that Twitter is “acting more like publications rather than a community forum” — the same reasoning Trump used in arguing for the repeal of Section 230.

The future of Parler, Gab and other critics of Silicon Valley social networks

Torba said that Gab’s community guidelines will not change in the future, something that he claimed distinguished the site from its competition.

“This is where Gab stands out,” he said. “For years we have taken the principled position of defending speech that Apple, Google and other Big Tech companies wish to see censored. When Apple and Google demanded that Gab ban [First Amendment] protected speech, we refused to bend the knee and were banned from both app stores as punishment.”

Torba said that Gab’s users were committed to the company’s ideals, and that they did not need the help of “impotent members of Congress.”

“We have a loud majority of truth seekers speaking very freely who are infinitely more important and influential,” he said.

In the future, Matze said to Ingraham, Parler will focus on so-called “influencer marketing.”

“That’s really important right now — the influencers can convey the message better than individuals or the page as a whole,” he said.

In June, Matze offered $10,000 to any left-leaning pundit with at least 50,000 followers on Facebook or Twitter willing to join Parler. Finding no one, he raised the “progressive bounty” to $20,000.

To date, no one has accepted.

Section 230 Interpretation Debate Heats Up Ahead of Landmark Supreme Court Case

Panelists disagreed over the merits of Section 230’s protections and the extent to which they apply.

Screenshot of speakers at the Federalist Society webinar

WASHINGTON, January 25, 2023 — With less than a month to go before the Supreme Court hears a case that could dramatically alter internet platform liability protections, speakers at a Federalist Society webinar on Tuesday were sharply divided over the merits and proper interpretation of Section 230 of the Communications Decency Act.

Gonzalez v. Google, which will go before the Supreme Court on Feb. 21, asks if Section 230 protects Google from liability for hosting terrorist content — and promoting that content via algorithmic recommendations.

If the Supreme Court agrees that “Section 230 does not protect targeted algorithmic recommendations, I don’t see a lot of the current social media platforms and the way they operate surviving,” said Ashkhen Kazaryan, a senior fellow at Stand Together.

Joel Thayer, president of the Digital Progress Institute, argued that the bare text of Section 230(c)(1) does not include any mention of the “immunities” often attributed to the statute, echoing an argument made by several Republican members of Congress.

“All the statute says is that we cannot treat interactive computer service providers or users — in this case, Google’s YouTube — as the publisher or speaker of a third-party post, such as a YouTube video,” Thayer said. “That is all. Warped interpretations from courts… have drastically moved away from the text of the statute to find Section 230(c)(1) as providing broad immunity to civil actions.”

Kazaryan disagreed with this claim, noting that the original co-authors of Section 230 — Sen. Ron Wyden, D-Ore., and former Rep. Chris Cox, R-Calif. — have repeatedly said that Section 230 does provide immunity from civil liability under specific circumstances.

Wyden and Cox reiterated this point in a brief filed Thursday in support of Google, explaining that whether a platform is entitled to immunity under Section 230 relies on two prerequisite conditions. First, the platform must not be “responsible, in whole or in part, for the creation or development of” the content in question, as laid out in Section 230(f)(3). Second, the case must be seeking to treat the platform “as the publisher or speaker” of that content, per Section 230(c)(1).

The statute co-authors argued that Google satisfied these conditions and was therefore entitled to immunity, even if their recommendation algorithms made it easier for users to find and consume terrorist content. “Section 230 protects targeted recommendations to the same extent that it protects other forms of content presentation,” they wrote.

Despite the support of Wyden and Cox, Randolph May, president of the Free State Foundation, predicted that the case was “not going to be a clean victory for Google.” And in addition to the upcoming Supreme Court cases, both Congress and President Joe Biden could attempt to reform or repeal Section 230 in the near future, May added.

May advocated for substantial reforms to Section 230 that would narrow online platforms’ immunity. He also proposed that a new rule rely on a “reasonable duty of care” that would both preserve the interests of online platforms and recognize the harms that fall under their control.

To establish a good replacement for Section 230, policymakers must determine whether there is “a difference between exercising editorial control over content on the one hand, and engaging in conduct relating to the distribution of content on the other hand… and if so, how you would treat those differently in terms of establishing liability,” May said.

No matter the Supreme Court’s decision in Gonzalez v. Google, the discussion is already “shifting the Overton window on how we think about social media platforms,” Kazaryan said. “And we already see proposed regulation and legislation on state and federal levels that address algorithms in many different ways and forms.”

Texas and Florida have already passed laws that would significantly limit social media platforms’ ability to moderate content, although both have been temporarily blocked pending litigation. Tech companies have asked the Supreme Court to take up the cases, arguing that the laws violate their First Amendment rights by forcing them to host certain speech.

Supreme Court Seeks Biden Administration’s Input on Texas and Florida Social Media Laws

The court has not yet agreed to hear the cases, but multiple justices have commented on their importance.

Photo of Solicitor General Elizabeth Prelogar courtesy of the U.S. Department of Justice

WASHINGTON, January 24, 2023 — The Supreme Court on Monday asked for the Biden administration’s input on a pair of state laws that would prevent social media platforms from moderating content based on viewpoint.

The Republican-backed laws in Texas and Florida both stem from allegations that tech companies are censoring conservative speech. The Texas law would bar platforms with at least 50 million users from removing or demonetizing content based on “viewpoint.” The Florida law places significant restrictions on platforms’ ability to remove any content posted by members of certain groups, including politicians.

Two trade groups — NetChoice and the Computer & Communications Industry Association — jointly challenged both laws, meeting with mixed results in appeals courts. They, alongside many tech companies, argue that the laws violate platforms’ First Amendment right to decide what speech to host.

Tech companies also warn that the laws would force them to disseminate objectionable and even dangerous content. In an emergency application to block the Texas law from going into effect in May, the trade groups wrote that such content could include “Russia’s propaganda claiming that its invasion of Ukraine is justified, ISIS propaganda claiming that extremism is warranted, neo-Nazi or KKK screeds denying or supporting the Holocaust, and encouraging children to engage in risky or unhealthy behavior like eating disorders.”

The Supreme Court has not yet agreed to hear the cases, but multiple justices have commented on the importance of the issue.

In response to the emergency application in May, Justice Samuel Alito wrote that the case involved “issues of great importance that will plainly merit this Court’s review.” However, he disagreed with the court’s decision to block the law pending review, writing that “whether applicants are likely to succeed under existing law is quite unclear.”

Monday’s request asking Solicitor General Elizabeth Prelogar to weigh in on the cases allows the court to put off the decision for another few months.

“It is crucial that the Supreme Court ultimately resolve this matter: it would be a dangerous precedent to let government insert itself into the decisions private companies make on what material to publish or disseminate online,” CCIA President Matt Schruers said in a statement. “The First Amendment protects both the right to speak and the right not to be compelled to speak, and we should not underestimate the consequences of giving government control over online speech in a democracy.”

The Supreme Court is still scheduled to hear two other major content moderation cases next month, which will decide whether Google and Twitter can be held liable for terrorist content hosted on their respective platforms.

Google Defends Section 230 in Supreme Court Terror Case

‘Section 230 is critical to enabling the digital sector’s efforts to respond to extremist[s],’ said a tech industry supporter.

Photo of ISIS supporter by HatabKhurasani from Wikipedia

WASHINGTON, January 13, 2023 — The Supreme Court could trigger a cascade of internet-altering effects that would encourage both the proliferation of offensive speech and the suppression of speech, creating a “litigation minefield,” if it decides Google is liable for the results of terrorist attacks by entities publishing on its YouTube platform, the search engine company argued Thursday.

The high court will hear the case of an American family whose daughter, Nohemi Gonzalez, was killed in an ISIS terrorist attack in Paris in 2015. The family sued Google under the Anti-Terrorism Act for the death, alleging that YouTube acted as a publisher of ISIS recruitment videos when it hosted them and its algorithm shared them on the video platform.

But in a brief to the court on Thursday, Google said that under Section 230 of the Communications Decency Act it is not liable for the content published by third parties on its website, and that deciding otherwise would effectively gut the platform protection provision and “upend the internet.”

Denying the provision’s protections for platforms “could have devastating spillover effects,” Google argued in the brief. “Websites like Google and Etsy depend on algorithms to sift through mountains of user-created content and display content likely relevant to each user. If plaintiffs could evade Section 230(c)(1) by targeting how websites sort content or trying to hold users liable for liking or sharing articles, the internet would devolve into a disorganized mess and a litigation minefield.”

It would also “perversely encourage both wide-ranging suppression of speech and the proliferation of more offensive speech,” it added in the brief. “Sites with the resources to take down objectionable content could become beholden to heckler’s vetoes, removing anything anyone found objectionable.

“Other sites, by contrast, could take the see-no-evil approach, disabling all filtering to avoid any inference of constructive knowledge of third-party content,” Google added. “Still other sites could vanish altogether.”

Google rejected the argument that recommendations by its algorithms convey an “implicit message,” arguing that in such a world, “any organized display [as algorithms do] of content ‘implicitly’ recommends that content and could be actionable.”

The Supreme Court is simultaneously hearing a similar case, Twitter v. Taamneh.

Scrutiny of Section 230 has loomed large since former President Donald Trump was banned from social media platforms for allegedly inciting the Capitol Hill riots in January 2021. Trump and conservatives called for rules limiting that protection in light of the suspensions and bans, while Democrats have not shied away from introducing legislation that would limit the provision if certain content continued to flourish on those platforms.

Supreme Court Justice Clarence Thomas early last year issued a statement calling for a reexamination of tech platform immunity protections following a Texas Supreme Court decision that said Facebook was shielded from liability in a trafficking case.

Meanwhile, startups and internet associations have argued for the preservation of the provision.

“These cases underscore how important it is that digital services have the resources and the legal certainty to deal with dangerous content online,” Matt Schruers, president of the Computer & Communications Industry Association, said in a statement when the Supreme Court decided in October to hear the Gonzalez case.

“Section 230 is critical to enabling the digital sector’s efforts to respond to extremist and violent rhetoric online,” he added, “and these cases illustrate why it is essential that those efforts continue.”
