

Parler, Gab, and Section 230: Right-Leaning Social Networks Push Alternative to Twitter and Facebook


Photo of Sen. Ted Cruz by Gage Skidmore used with permission

July 7, 2020 — Many conservatives have long accused popular social media platforms of discriminating against their ideas, but that feeling reached a new urgency in late May when Twitter flagged several of President Donald Trump’s tweets for touting unsubstantiated claims and glorifying violence.

The move sparked outrage from some on the right, with Republican users saying that Twitter had treated other, similarly controversial posts differently.

Shortly after, Trump called for the revocation of Section 230 of the Communications Decency Act, which states that “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

Don’t miss Broadband Breakfast Live Online on Wednesday, July 8, 2020 — “Section 230 in an Election Year: How Republicans and Democrats are Approaching Proposed Changes.” This event is part of a three-part series, “Section 230: Separating Fact From Fiction.”

Put simply, the law protects online platforms from liability for the content their users post. If a user posts a threat or other illegal content on a website, only that individual user can be held responsible for the crime.

Shortly after he called for the statute’s repeal, Trump signed an executive order that attempted to restrict Section 230’s protections by potentially withholding federal funds from tech companies that engage in “viewpoint discrimination, deception to consumers, or other bad practices.”

The order was met with skepticism from many digital policy experts.

Michael Petricone, senior vice president of government affairs for the Consumer Technology Association, said it was “not only ill-considered, it is unconstitutional.”

“For the past few months, people… have relied on online platforms to connect and communicate,” he said. “It was Section 230 and the free speech protections enjoyed by online platforms that enabled their success and subsequently, their ability to support struggling Americans during the pandemic.”

The legal footing on which the order stands is unclear, but frustration at alleged incidents of bias has continued to grow.

Parler’s ‘free speech’ community standards

Several conservative commentators called for a conservative exodus from Twitter.

In its stead, some have migrated to platforms like Parler, which claims to offer users an escape from alleged anti-conservative bias.

Public figures like Sen. Ted Cruz, R-Texas, and Fox News anchor Sean Hannity made Parler accounts that garnered hundreds of thousands of followers in mere days. Some, like Sen. Mike Lee, R-Utah, had previously made accounts but only recently resumed use of them.

Big tech companies like Facebook and Twitter “have an unparalleled ability to shape what Americans see and hear and ultimately think,” Cruz wrote in a post to the site. “And they use that power to silence conservatives and to promote their radical leftwing agenda.”

Parler aims to be “a non-biased free speech driven entity,” and cites Supreme Court decisions and Federal Communications Commission media regulations to justify much of what it deems off-limits.

However, the platform still removes posts and even users that run afoul of its content moderation policies, including legally protected content like crass speech, pornography and spam — all of which are permitted on Twitter, provided that pornography is tagged as “sensitive.”

In a post last week, Parler CEO John Matze warned users that such actions would not be tolerated on the website, and that if a user were in doubt about what is acceptable to post, he should “ask [himself] if [he] would say it on the streets of New York or national television.”

Further, the volume of unacceptable content on the platform has led the company to ask members of its userbase to spend two hours a day, without compensation, sifting through potentially violent, pornographic, incestuous, bestial or otherwise undesirable content, with the promise of future payment of an unspecified amount.

What Parler does clearly permit is virally shared false content. QAnon conspiracy theories and phraseology are common on the site, as are incendiary claims directed at political opponents.

One such post claims that Rep. Ilhan Omar, D-Minn., called for “all white men [to] be put in chains” in a post on Twitter. No such tweet exists.

One can find myriad examples of harmful behavior on mainstream platforms like Twitter and Facebook as well. But ultimately, the ability of those platforms, and of Parler, to decide which legal content to leave untouched is protected by the very piece of legislation that the president, whom many of Parler’s users and investors support, is trying to repeal.

Gab’s looser guidelines on content

Other alternative social media networks, like Gab, allow far more content on their platforms than Parler. Gab’s loose content guidelines permit almost anything that is not copyright infringement, impersonation, an unlawful threat, or obscene or pornographic material, although nudity for “protest or for educational/medical reasons” is permitted.

Because of the lax restrictions, CEO Andrew Torba said, “Gab is an online refuge for anyone who wishes to speak freely and securely away from the tyranny of Silicon Valley.”

However, such a refuge has led to communities centered around ethnic hatred that are generally banned on Parler, as well as on the more “mainstream” and popular social networks Twitter and Facebook. In 2018, Gab user Robert Bowers brought notoriety to the platform when he posted on Gab that he was “going in” before entering a Pittsburgh synagogue, where he murdered 11 people and injured several others.

Before the shooting, Bowers’ account was replete with anti-Semitic content. He railed against “Zionist Operated Governments” and the Hebrew Immigrant Aid Society, which assists refugees, and warned of a “kike infestation.” The posts and Bowers’ account were removed following the shooting.

Torba claimed that journalists who accuse Gab of being a safe haven for white supremacists and radicals are “Marxist propagandists and proven liars,” and that there are numerous healthy groups on the site.

He also said that in the wake of Twitter’s response to Trump’s tweets, the platform has seen a spike in membership to over four million users.

“We saw an increase of 100,000 new daily active users join the Gab community in the past week alone,” he said. “In June, we brought on 200,000 new and returning users after the President’s tweets started getting ‘fact-checked’ by Twitter.”

Despite Gab’s history, its community guidelines come closer to reflecting its stated vision and seem less arbitrarily enforced than Parler’s. Torba said that because Parler is available in Apple’s App Store and Google Play (both of which have barred Gab), it is required to enforce not only its own community guidelines but also those of its providers.

“[This] is likely why Parler is already mass banning users,” he said. “From what I’ve seen, Twitter has more free speech than Parler does.”

In an interview with Pastor Rick Wiles of TruNews (who has warned of a “brown invasion” of the United States and referred to the impeachment of President Donald Trump as a “Jew coup”), Torba said he refuses to “bend the knee” to those who wish to see Gab fail, and that he sees his work as eternally important.

“Hey, if Christ can get up on that cross,” he said, “I can pick up the cross and do what I do. That’s the way I see it.”

Gab CEO supports Section 230

Torba is vocally pro-Section 230. In a Gab News post, he championed the right of private companies to “moderate their platform as they see fit.”

“They can ban anyone for any reason,” he continued. “They can have a rule that says no one can post videos of their cat anymore if they so choose and they can certainly decide to.”

However, Torba also contended that Section 230’s protections do not extend to Twitter or Facebook’s fact-checking practices.

“When Big Tech platforms ‘fact-check’ posts or ‘editorialize’ content, Section 230 does not apply to that speech,” he said. “That speech is them speaking, not a user. Section 230 does not prevent them from being held liable for their own speech.”

Other right-leaning figures, like David Harsanyi of the National Review, have argued that while Twitter’s decision to mark Trump’s tweets as false or glorifying violence will only fuel accusations of bias, it is within the platform’s legal right to do so.

“No American, not even the president, has an inherent right to a social media account,” he said. “Tech companies such as Facebook and Twitter are free to ban any user they see fit. They’re free to accuse Donald Trump — and only Trump, if they see fit — of being a liar, even if they shouldn’t.”

Parler’s Matze has expressed a contrasting view.

In an interview with Fox News’ Laura Ingraham, Matze said that while he does not like the idea of being perceived as a Twitter alternative, he believes that Parler’s practices are in better keeping with free speech and that Twitter is “acting more like publications rather than a community forum” — the same reasoning Trump used in arguing for the repeal of Section 230.

The future of Parler, Gab and other critics of Silicon Valley social networks

Torba said that Gab’s community guidelines will not change in the future, something that he claimed distinguished the site from its competition.

“This is where Gab stands out,” he said. “For years we have taken the principled position of defending speech that Apple, Google and other Big Tech companies wish to see censored. When Apple and Google demanded that Gab ban [First Amendment] protected speech, we refused to bend the knee and were banned from both app stores as punishment.”

Torba said that Gab’s users were committed to the company’s ideals, and that they did not need the help of “impotent members of Congress.”

“We have a loud majority of truth seekers speaking very freely who are infinitely more important and influential,” he said.

In the future, Matze told Ingraham, Parler will focus on so-called “influencer marketing.”

“That’s really important right now — the influencers can convey the message better than individuals or the page as a whole,” he said.

In June, Matze offered $10,000 to any left-leaning pundit with at least 50,000 followers on Facebook or Twitter willing to join Parler. When no one took the offer, he raised the “progressive bounty” to $20,000.

To date, no one has accepted.


Democrats Use Whistleblower Testimony to Launch New Effort at Changing Section 230

The Justice Against Malicious Algorithms Act seeks to target large online platforms that push harmful content.


Rep. Anna Eshoo, D-California

WASHINGTON, October 14, 2021 – House Democrats are preparing to introduce legislation Friday that would remove legal immunities from companies that knowingly allow content that is physically or emotionally damaging to their users, following testimony last week from a Facebook whistleblower who claimed the company is able to push harmful content because of such legal protections.

The Justice Against Malicious Algorithms Act would amend Section 230 of the Communications Decency Act – which provides companies legal liability protections for the content their users post on their platforms – to remove that shield when a platform “knowingly or recklessly uses an algorithm or other technology to recommend content that materially contributes to physical or severe emotional injury,” according to a Thursday press release. The release noted that the legislation will not apply to small online platforms with fewer than five million unique monthly visitors or users.

The legislation is relatively narrow in its target: algorithms that rely on a user’s personal history to recommend content. It won’t apply to search features or algorithms that do not rely on that personalization, nor to web hosting or data storage and transfer.

Reps. Anna Eshoo, D-California, Frank Pallone Jr., D-New Jersey, Mike Doyle, D-Pennsylvania, and Jan Schakowsky, D-Illinois, plan to introduce the legislation a little over a week after Facebook whistleblower Frances Haugen alleged that the company misrepresents how much offending content it terminates.

Citing Haugen’s testimony before the Senate on October 5, Eshoo said in the release that “Facebook is knowingly amplifying harmful content and abusing the immunity of Section 230 well beyond congressional intent.

“The Justice Against Malicious Algorithms Act ensures courts can hold platforms accountable when they knowingly or recklessly recommend content that materially contributes to harm. This approach builds on my bill, the Protecting Americans from Dangerous Algorithms Act, and I’m proud to partner with my colleagues on this important legislation.”

The Protecting Americans from Dangerous Algorithms Act was introduced with Rep. Tom Malinowski, D-New Jersey, last October to hold companies responsible for “algorithmic amplification of harmful, radicalizing content that leads to offline violence.”

From Haugen testimony to legislation

Haugen claimed in her Senate testimony that according to internal research estimates, Facebook acts against just three to five percent of hate speech and 0.6 percent of violence incitement.

“The reality, as we’ve seen from repeated documents in my disclosures, is that Facebook’s AI systems only catch a very tiny minority of offending content and, best-case scenario, in the case of something like hate speech, at most they will ever get 10 to 20 percent,” Haugen testified.

Haugen was catapulted into the national spotlight after she revealed herself on the television program 60 Minutes to be the person who leaked documents to the Wall Street Journal and the Securities and Exchange Commission that reportedly showed Facebook knew about the mental health harm its photo-sharing app Instagram causes teens but allegedly ignored the findings because addressing them would conflict with its profit motive.

Earlier this year, Facebook CEO Mark Zuckerberg said the company was developing a version of Instagram for kids under 13. But following the Journal story and calls by lawmakers to back down from pursuing the app, Facebook suspended its development and said it was making changes to its apps to “nudge” users away from content that may be harmful to them.

Haugen’s testimony versus Zuckerberg’s Section 230 vision

In his testimony before the House Energy and Commerce Committee in March, Zuckerberg claimed that the company’s hate speech removal policy “has long been the broadest and most aggressive in the industry.”

This claim has been the basis for the CEO’s suggestion that Section 230 be amended to punish companies whose systems for removing violent and hateful content are not proportional in size and effectiveness to their platforms. In other words, larger sites would face more regulation and smaller sites less.

Or in Zuckerberg’s words to Congress, “platforms’ intermediary liability protection for certain types of unlawful content [should be made] conditional on companies’ ability to meet best practices to combat the spread of harmful content.”

Facebook has previously pushed for FOSTA-SESTA, a controversial 2018 law that created an exception to Section 230 for advertisements related to prostitution. Lawmakers have proposed other modifications to the liability provision, including removing protections for content that the platform is paid to carry and for platforms that allow the spread of vaccine misinformation.

Zuckerberg said companies shouldn’t be held responsible for individual pieces of content that could or would evade the systems in place, so long as the company has demonstrated that it maintains “adequate systems to address unlawful content.” That, he said, is predicated on transparency.

But according to Haugen, “Facebook’s closed design means it has no oversight — even from its own Oversight Board, which is as blind as the public. Only Facebook knows how it personalizes your feed for you. It hides behind walls that keep the eyes of researchers and regulators from understanding the true dynamics of the system.” She also alleges that Facebook’s leadership hides “vital information” from the public and global governments.

An Electronic Frontier Foundation study found that Facebook lags behind competitors on issues of transparency.

Where the parties agree

Zuckerberg and Haugen do agree that Section 230 should be amended. Haugen would amend Section 230 “to make Facebook responsible for the consequences of their intentional ranking decisions,” meaning that practices such as engagement-based ranking would be evaluated for the incendiary or violent content they promote above more mundane content. If Facebook is choosing to promote content which damages mental health or incites violence, Haugen’s vision of Section 230 would hold them accountable. This change would not hold Facebook responsible for user-generated content, only the promotion of harmful content.

Both have also called for the creation of a third-party body, established by Congress, to provide oversight of platforms like Facebook.

Haugen asks that this body be able to conduct independent audits of Facebook’s data, algorithms and research, and that the information be made available to the public, scholars and researchers to interpret, with adequate privacy protection and anonymization in place. Zuckerberg asks that the body take into account the size and scope of the platforms it regulates, that its practices be “fair and clear,” and that unrelated issues “like encryption or privacy changes” be dealt with separately.

With reporting from Riley Steward



Repealing Section 230 Would be Harmful to the Internet As We Know It, Experts Agree

While some advocate for a tightening of language, other experts believe Section 230 should not be touched.


Rep. Ken Buck, R-Colo., speaking on the floor of the House

WASHINGTON, September 17, 2021—Rep. Ken Buck, R-Colorado, advocated for legislators to “tighten up” the language of Section 230 while preserving the “spirit of the internet” and enhancing competition.

There is common ground in supporting efforts to minimize speech advocating imminent harm, Buck said, even though he noted that Republican and Democratic critics tend to approach changing Section 230 from vastly different directions.

“Nobody wants a terrorist organization recruiting on the internet or an organization that is calling for violent actions to have access to Facebook,” Buck said. He followed up that statement, however, by stating that the most effective way to combat “bad speech is with good speech” and not by censoring “what one person considers bad speech.”

Antitrust not necessarily the best means to improve competition policy

For companies that are not technically in violation of antitrust policies, improving competition through other means would have to be the answer, said Buck. He pointed to Parler as a social media platform that is an appropriate alternative to Twitter.

Though some Twitter users did flock to Parler, particularly during and around the 2020 election, the newer social media company has a reputation for allowing objectionable content that would be unable to thrive on mainstream social media.

Buck also set himself apart from some of his fellow Republicans—including Donald Trump—by clarifying that he does not want to repeal Section 230.

“I think that repealing Section 230 is a mistake,” he said. “If you repeal Section 230, there will be a slew of lawsuits.” Buck explained that without the protections afforded by Section 230, big companies would likely find ways to weather those lawsuits, and the only entities harmed would be the alternative platforms that were meant to serve as competition.

More content moderation needed

Daphne Keller of the Stanford Cyber Policy Center argued that it is in the best interest of social media platforms to enact various forms of content moderation, and address speech that may be legal but objectionable.

“If platforms just hosted everything that users wanted to say online, or even everything that’s legal to say—everything that the First Amendment permits—you would get this sort of cesspool or mosh pit of online speech that most people don’t actually want to see,” she said. “Users would run away and advertisers would run away and we wouldn’t have functioning platforms for civic discourse.”

Even companies like Parler and Gab—which pride themselves on being unyielding bastions of free speech—have begun to engage in content moderation.

“There’s not really a left-right divide on whether that’s a good idea,” she said, “because nobody actually wants nothing but porn and bullying and pro-anorexia content and other dangerous or garbage content all the time on the internet.”

She explained that this is a double-edged sword, because while consumers seem to value some level of moderation, companies moderating their platforms have a huge amount of influence over what their consumers see and say.

What problems do critics of Section 230 want addressed?

Internet Association President and CEO Dane Snowden stated that most of the problems surrounding the Section 230 discussion boil down to a fundamental disagreement over the problems that legislators are trying to solve.

Changing the language of Section 230 would impact not just the tech industry, he said: “[Section 230] impacts ISPs, libraries, and universities. Things like self-publishing, crowdsourcing, Wikipedia, how-to videos—all those things are impacted by any kind of significant neutering of Section 230.”

Section 230 was created to give users the ability and security to create content online without fear of legal reprisals, he said.

Another significant supporter of the status quo was Chamber of Progress CEO Adam Kovacevich.

“I don’t think Section 230 needs to be fixed. I think it needs [a better] publicist,” Kovacevich said, arguing that policymakers need to gain a better appreciation for Section 230. “If you took away 230, you’d give companies two bad options: either turn into Disneyland or turn into a wasteland.”

“Either turn into a very highly curated experience where only certain people have the ability to post content, or turn into a wasteland where essentially anything goes because a company fears legal liability,” Kovacevich said.



Judge Rules Exemption Exists in Section 230 for Twitter FOSTA Case

Latest lawsuit illustrates the increasing fragility of Section 230 legal protections.


Twitter CEO Jack Dorsey.

August 24, 2021—A California court has allowed a lawsuit against Twitter to proceed from two victims of sex trafficking, who allege the social media company initially refused to remove sexually exploitative content of the underage plaintiffs, which then went viral.

The anonymous plaintiffs allege that they were manipulated into making pornographic videos of themselves through another social media app, Snapchat, after which the videos were posted on Twitter. When the plaintiffs asked Twitter to take down the posts, it refused, and it was only after the Department of Homeland Security got involved that the social media company complied.

At issue in the case is whether Twitter had any obligation to remove the content immediately under Section 230 of the Communications Decency Act, which provides platforms legal liability protections for the content their users post.

Court’s finding

The court ruled Thursday that the case should proceed after finding that Twitter knew such content was on the site, had to have known it involved sex trafficking, and refused to do something about it immediately.

“The Court finds that these allegations are sufficient to allege an ongoing pattern of conduct amounting to a tacit agreement with the perpetrators in this case to allow them to post videos and photographs it knew or should have known were related to sex trafficking without blocking their accounts or the Videos,” the decision read.

“In sum, the Court finds that Plaintiffs have stated a claim for civil liability under the [Trafficking Victims Protection Reauthorization Act] on the basis of beneficiary liability and that the claim falls within the exemption to Section 230 immunity created by FOSTA.”

The Stop Enabling Sex Traffickers Act and the Allow States and Victims to Fight Online Sex Trafficking Act, passed together in 2018 as the package law SESTA-FOSTA, amended Section 230 to exclude the enforcement of federal or state sex trafficking laws from intermediary immunity.

The court dismissed other claims the plaintiffs made against the company, but found that the trafficking claim met the relatively low bar to move the case forward.

The arguments

The plaintiffs allege that Twitter violated the TVPRA because it knew about the videos, benefitted from them and did nothing to address the problem before the content went viral.

Twitter argued that FOSTA, as applied to the CDA, applies only narrowly to websites that are “knowingly assisting and profiting from reprehensible crimes”; that the plaintiffs fail to show that the company “affirmatively participated” in such crimes; and that the company cannot be held liable “simply because it did not take the videos down immediately.”

Experts asserted companies may hesitate to bring Section 230 defense in court

The case is yet another instance of U.S. courts poking holes in technology companies’ arguments that, per Section 230, they cannot be held liable for content on their platforms. The provision is currently the subject of hot debate in Washington over whether to reform it or abolish it entirely.

State judges in Texas and California, for example, have ruled against Amazon and its Section 230 defense in a number of case-specific instances. Experts on a panel in May said that if courts keep ruling against the defense, a deluge of lawsuits against companies may follow.

And last month, citing some of these cases, lawyers argued that big tech companies may begin to shy away from bringing the Section 230 defense to court, for fear of awakening lawmakers to changing legal views on the provision that could ignite its reform.

