

Social Media Needs to Be Held Accountable to a Higher Standard Than No Standard


Photo of Sara-Jayne Terp from Atlantic Council

February 8, 2021— The spread of disinformation and misinformation can be controlled if the same rules on transparency required of the broadcast industry are applied to social media, an Atlantic Council webinar heard Wednesday.

That includes making changes to Section 230 of the Communications Decency Act governing liability of internet intermediaries to include a requirement that social media companies make clear who paid for ads that are displayed, said Pablo Breuer, co-author of the Adversarial Misinformation and Influence Tactics and Techniques framework.

Breuer’s framework, which was co-authored with Sara-Jayne Terp, seeks to identify the best means to detect and discuss what Terp referred to as “disinformation behaviors.”

The webinar last Wednesday focused on the critical issue of misinformation and disinformation and the roles and responsibilities of social media, the government and citizens.

Breuer noted that just four years ago, the attitude surrounding misinformation and disinformation campaigns was very different.

“When Sara-Jayne and I started talking about this, people thought we were crazy—they thought there was no disinformation problem,” he said. “Now you see it covered on the nightly news.”

When asked why the issue has only come to the forefront of society within the last couple of years, Breuer pointed out that in the past, disseminating information required a lot of capital. With the advent of social media, that was no longer the case.

Pablo Breuer

“We’ve democratized the ability to reach a mass audience. Now we live in a world where an entertainer has twice the number of followers as the President of the United States,” said Breuer. “They don’t have to clear their message with anyone—they can say something completely false.”

For a long time, social media was a largely unregulated wild west of commentary, news and opinions.

But the data-harvesting exploits of firms like Cambridge Analytica, which exposed how information was used to mold citizens’ thinking on issues that shaped political elections around the world, began to put things into focus.

We may be approaching the end of non-regulation: the banning of former President Donald Trump and other right-wing political commentators from Twitter and other social media platforms could lead to renewed scrutiny of the power of tech companies.

Breuer conceded that while more attention on the issue is a step in the right direction, there are still huge dangers associated with the spread of fraudulent information and the many channels available to malevolent actors.

Following the banning of those figures, many of their followers gravitated toward other, more receptive applications, including Parler and Gab.

Countermeasures to social media disinformation?

Terp and Breuer compiled a list of what they regard as effective countermeasures to mitigate misinformation. Terp noted that many people have been unknowingly co-opted as “unwitting agents,” and that these agents are not necessarily being influenced by external entities.

“Disinformation is coming from inside the house. What we are seeing is this move past ‘the Russians are coming’ to a more honest discussion about financial motivations, political motivations and reputational drivers of misinformation,” Terp said.

Terp also said that there is a strong relationship between privacy, democracy, and disinformation, explaining that greater consumer privacy reduces how precisely outside entities can target the content a consumer is exposed to.

In the aftermath of Facebook’s move to wholly integrate WhatsApp into its social media ecosystem, for example, Signal, a privacy-by-design messaging app, saw its adoption skyrocket. End-to-end encrypted messaging has also been a problem for law enforcement, officials say, because it inhibits their ability to access criminals’ messages.

Terp described disinformation as merchandise and said that one of the primary goals of anyone trying to curb its spread should be to take the money out of it. According to Terp, countermeasures deployed by social media platforms to make disinformation less profitable have had a mitigating effect.

Tackling bad behavior, not combatting people

In her conclusion, Terp made it clear that the only way to make policies that are effective at combatting the spread of disinformation is to tackle the behavior, not the people. More needs to be done to spot behaviors early so that social media and government can engage in more preventative action, she said, rather than simply reacting to things as they happen.

Breuer offered some advice for the average person: He encouraged the audience to engage with those they disagree with, and to avoid trapping themselves in a virtual echo chamber.

He added that the government needs to reexamine Section 230 and be more proactive in crafting policy to address the demands of modern technology.

As a child of American parents working abroad, reporter Ben Kahn was raised as a third culture kid, growing up in five different countries, including the U.S. He is a recent graduate of the University of Baltimore, where he majored in Policy, Politics, and International Affairs. He enjoys learning about foreign languages and cultures and can now speak poorly in more than one language.


Democrats Use Whistleblower Testimony to Launch New Effort at Changing Section 230

The Justice Against Malicious Algorithms Act seeks to target large online platforms that push harmful content.


Rep. Anna Eshoo, D-California

WASHINGTON, October 14, 2021 – House Democrats are preparing to introduce legislation Friday that would remove legal immunities for companies that knowingly allow content that is physically or emotionally damaging to their users, following testimony last week from a Facebook whistleblower who claimed the company is able to push harmful content because of such legal protections.

The Justice Against Malicious Algorithms Act would amend Section 230 of the Communications Decency Act – which provides legal liability protections to companies for the content their users post on their platform – to remove that shield when the platform “knowingly or recklessly uses an algorithm or other technology to recommend content that materially contributes to physical or severe emotional injury,” according to a Thursday press release, which noted that the legislation will not apply to small online platforms with fewer than five million unique monthly visitors or users.

The legislation is relatively narrow in its target: algorithms that rely on a user’s personal history to recommend content. It won’t apply to search features or algorithms that do not rely on that personalization, and it won’t apply to web hosting or data storage and transfer.

Reps. Anna Eshoo, D-California, Frank Pallone Jr., D-New Jersey, Mike Doyle, D-Pennsylvania, and Jan Schakowsky, D-Illinois, plan to introduce the legislation a little over a week after Facebook whistleblower Frances Haugen alleged that the company misrepresents how much offending content it terminates.

Citing Haugen’s testimony before the Senate on October 5, Eshoo said in the release that “Facebook is knowingly amplifying harmful content and abusing the immunity of Section 230 well beyond congressional intent.

“The Justice Against Malicious Algorithms Act ensures courts can hold platforms accountable when they knowingly or recklessly recommend content that materially contributes to harm. This approach builds on my bill, the Protecting Americans from Dangerous Algorithms Act, and I’m proud to partner with my colleagues on this important legislation.”

The Protecting Americans from Dangerous Algorithms Act was introduced with Rep. Tom Malinowski, D-New Jersey, last October to hold companies responsible for “algorithmic amplification of harmful, radicalizing content that leads to offline violence.”

From Haugen testimony to legislation

Haugen claimed in her Senate testimony that according to internal research estimates, Facebook acts against just three to five percent of hate speech and 0.6 percent of violence incitement.

“The reality is that we’ve seen from repeated documents in my disclosures that Facebook’s AI systems only catch a very tiny minority of offending content and, best-case scenario, in the case of something like hate speech, at most they will ever get 10 to 20 percent,” Haugen testified.

Haugen was catapulted into the national spotlight after she revealed herself on the television program 60 Minutes to be the person who leaked documents to the Wall Street Journal and the Securities and Exchange Commission that reportedly showed Facebook knew about the mental health harms its photo-sharing app Instagram causes teens, but allegedly ignored the findings because they were inconvenient to its profit motive.

Earlier this year, Facebook CEO Mark Zuckerberg said the company was developing a version of Instagram for kids under 13. But following the Journal story and calls by lawmakers to back down from pursuing the app, Facebook suspended the app’s development and said it was making changes to its apps to “nudge” users away from content that may be harmful to them.

Haugen’s testimony versus Zuckerberg’s Section 230 vision

In his testimony before the House Energy and Commerce Committee in March, Zuckerberg claimed that the company’s hate speech removal policy “has long been the broadest and most aggressive in the industry.”

This claim has been the basis for the CEO’s suggestion that Section 230 be amended to punish companies that fail to build systems for removing violent and hateful content proportional in size and effectiveness to the platform’s size. In other words, larger sites would face more regulation and smaller sites would face less.

Or in Zuckerberg’s words to Congress, “platforms’ intermediary liability protection for certain types of unlawful content [should be made] conditional on companies’ ability to meet best practices to combat the spread of harmful content.”

Facebook has previously pushed for FOSTA-SESTA, a controversial 2018 law that created an exception to Section 230 for advertisements related to prostitution. Lawmakers have proposed other modifications to the liability provision, including removing protections for content that platforms are paid to carry and for platforms that allow the spread of vaccine misinformation.

Zuckerberg said companies shouldn’t be held responsible for individual pieces of content which could or would evade the systems in place so long as the company has demonstrated the ability and procedure of “adequate systems to address unlawful content.” That, he said, is predicated on transparency.

But according to Haugen, “Facebook’s closed design means it has no oversight — even from its own Oversight Board, which is as blind as the public. Only Facebook knows how it personalizes your feed for you. It hides behind walls that keep the eyes of researchers and regulators from understanding the true dynamics of the system.” She also alleges that Facebook’s leadership hides “vital information” from the public and global governments.

An Electronic Frontier Foundation study found that Facebook lags behind competitors on issues of transparency.

Where the parties agree

Zuckerberg and Haugen do agree that Section 230 should be amended. Haugen would amend Section 230 “to make Facebook responsible for the consequences of their intentional ranking decisions,” meaning that practices such as engagement-based ranking would be evaluated for the incendiary or violent content they promote above more mundane content. If Facebook is choosing to promote content which damages mental health or incites violence, Haugen’s vision of Section 230 would hold them accountable. This change would not hold Facebook responsible for user-generated content, only the promotion of harmful content.

Both have also called for Congress to create a third-party body that provides oversight of platforms like Facebook.

Haugen asks that this body be able to conduct independent audits of Facebook’s data, algorithms, and research, and that the information be made available to the public, scholars and researchers to interpret, with adequate privacy protections and anonymization in place. Zuckerberg asks that the body take into account the size and scope of the platforms it regulates, that its practices be “fair and clear,” and that unrelated issues “like encryption or privacy changes” be dealt with separately.

With reporting from Riley Steward



Repealing Section 230 Would Be Harmful to the Internet As We Know It, Experts Agree

While some advocate for a tightening of language, other experts believe Section 230 should not be touched.


Rep. Ken Buck, R-Colo., speaking on the floor of the House

WASHINGTON, September 17, 2021—Republican Rep. Ken Buck of Colorado advocated for legislators to “tighten up” the language of Section 230 while preserving the “spirit of the internet” and enhancing competition.

There is common ground in supporting efforts to minimize speech advocating imminent harm, said Buck, even though he noted that Republican and Democratic critics tend to approach the issue of changing Section 230 from vastly different directions.

“Nobody wants a terrorist organization recruiting on the internet or an organization that is calling for violent actions to have access to Facebook,” Buck said. He followed up that statement, however, by stating that the most effective way to combat “bad speech is with good speech” and not by censoring “what one person considers bad speech.”

Antitrust not necessarily the best means to improve competition policy

For companies that are not technically in violation of antitrust policies, improving competition through other means would have to be the answer, said Buck. He pointed to Parler as a social media platform that is an appropriate alternative to Twitter.

Though some Twitter users did flock to Parler, particularly during and around the 2020 election, the newer social media company has a reputation for allowing objectionable content that would otherwise be unable to thrive on social media.

Buck also set himself apart from some of his fellow Republicans—including Donald Trump—by clarifying that he does not want to repeal Section 230.

“I think that repealing Section 230 is a mistake,” he said. “If you repeal Section 230, there will be a slew of lawsuits.” Buck explained that without the protections afforded by Section 230, big companies will likely find a way to sufficiently address these lawsuits, and the only entities that will be harmed will be the alternative platforms that were meant to serve as competition.

More content moderation needed

Daphne Keller of the Stanford Cyber Policy Center argued that it is in the best interest of social media platforms to enact various forms of content moderation, and address speech that may be legal but objectionable.

“If platforms just hosted everything that users wanted to say online, or even everything that’s legal to say—everything that the First Amendment permits—you would get this sort of cesspool or mosh pit of online speech that most people don’t actually want to see,” she said. “Users would run away and advertisers would run away and we wouldn’t have functioning platforms for civic discourse.”

Even companies like Parler and Gab—which pride themselves on being unyielding bastions of free speech—have begun to engage in content moderation.

“There’s not really a left-right divide on whether that’s a good idea,” she said, “because nobody actually wants nothing but porn and bullying and pro-anorexia content and other dangerous or garbage content all the time on the internet.”

She explained that this is a double-edged sword, because while consumers seem to value some level of moderation, companies moderating their platforms have a huge amount of influence over what their consumers see and say.

What problems do critics of Section 230 want addressed?

Internet Association President and CEO Dane Snowden stated that most of the problems surrounding the Section 230 discussion boil down to a fundamental disagreement over the problems that legislators are trying to solve.

Changing the language of Section 230 would impact not just the tech industry: “[Section 230] impacts ISPs, libraries, and universities,” he said. “Things like self-publishing, crowdsourcing, Wikipedia, how-to videos—all those things are impacted by any kind of significant neutering of Section 230.”

Section 230 was created to give users the ability and security to create content online without fear of legal reprisals, he said.

Another significant supporter of the status quo was Chamber of Progress CEO Adam Kovacevich.

“I don’t think Section 230 needs to be fixed. I think it needs [a better] publicist,” Kovacevich said, adding that policymakers need to gain a better appreciation for Section 230. “If you took away 230, you’d give companies two bad options: either turn into Disneyland or turn into a wasteland.”

“Either turn into a very highly curated experience where only certain people have the ability to post content, or turn into a wasteland where essentially anything goes because a company fears legal liability,” Kovacevich said.



Judge Rules Exemption Exists in Section 230 for Twitter FOSTA Case

Latest lawsuit illustrates the increasing fragility of Section 230 legal protections.


Twitter CEO Jack Dorsey.

August 24, 2021—A California court has allowed a lawsuit against Twitter to proceed, brought by two victims of sex trafficking who allege that the social media company initially refused to remove content that sexually exploited the underage plaintiffs, content that then went viral.

The anonymous plaintiffs allege that they were manipulated into making pornographic videos of themselves through another social media app, Snapchat, after which the videos were posted on Twitter. When the plaintiffs asked Twitter to take down the posts, it refused, and it was only after the Department of Homeland Security got involved that the social media company complied.

At issue in the case is whether, under Section 230 of the Communications Decency Act, which provides legal liability protections for the content that platforms’ users post, Twitter had any obligation to remove the content immediately.

Court’s finding

The court ruled Thursday that the case should proceed after finding that Twitter knew such content was on the site, had to have known it was related to sex trafficking, and refused to do anything about it immediately.

“The Court finds that these allegations are sufficient to allege an ongoing pattern of conduct amounting to a tacit agreement with the perpetrators in this case to allow them to post videos and photographs it knew or should have known were related to sex trafficking without blocking their accounts or the Videos,” the decision read.

“In sum, the Court finds that Plaintiffs have stated a claim for civil liability under the [Trafficking Victims Protection Reauthorization Act] on the basis of beneficiary liability and that the claim falls within the exemption to Section 230 immunity created by FOSTA.”

The Stop Enabling Sex Traffickers Act and the Allow States and Victims to Fight Online Sex Trafficking Act, passed together in 2018 as the package law FOSTA-SESTA, amended Section 230 to exclude the enforcement of federal and state sex trafficking laws from intermediary immunity.

The court dismissed the plaintiffs’ other claims against the company, but found that the trafficking claim met the relatively low bar to move the case forward.

The arguments

The plaintiffs allege that Twitter violated the TVPRA because it knew about the videos, benefited from them and did nothing to address the problem before the content went viral.

Twitter argued that FOSTA, as applied to the CDA, only narrowly applies to websites that are “knowingly assisting and profiting from reprehensible crimes;” the plaintiffs allegedly fail to show that the company “affirmatively participated” in such crimes; and the company cannot be held liable “simply because it did not take the videos down immediately.”

Experts asserted companies may hesitate to bring Section 230 defense in court

The case is yet another instance of U.S. courts poking holes in technology companies’ arguments that Section 230 shields them from liability for content on their platforms. The provision is currently the subject of heated debate in Washington over whether to reform or abolish it.

State judges in Texas and California, for example, have ruled against Amazon and its Section 230 defense in a number of case-specific instances. Experts on a panel in May said that if courts keep ruling against the defense, a deluge of lawsuits against companies may follow.

And last month, citing some of these cases, lawyers argued that big tech companies may begin to shy away from bringing the Section 230 defense to court for fear of awakening lawmakers to changing legal views on the provision, which could ignite its reform.

