
Free Speech

Telecom Companies Need To Challenge Governments Over Internet Shutdowns: Advocacy Groups

Screenshot from the webinar

March 13, 2021 – Internet shutdowns imposed by governments around the globe are harming connectivity-dependent sectors, including education and business, and citizens should pressure telecom companies to take action against the governments behind those blackouts, experts said Tuesday.

Internet blackouts in countries like Myanmar, India, Iran, China, Hong Kong, Russia, Turkey, and Vietnam have put into focus how the practice of silencing dissent by cutting off social tools has become normalized and is suppressing critical educational and business activity.

That’s according to panelists hosted by the Aspen Institute, who were tasked Tuesday with discussing the impacts of the practice.

Adrian Shahbaz, director of non-profit democracy advocate Freedom House, said his organization has tracked a ten-year decline in internet freedom across categories including obstacles to accessing the internet, limits on content, and violations of user rights.

He said restrictions on social media networks like Facebook and Twitter go beyond social interaction, as they have significant collateral implications for those who use those tools to access educational materials and to engage in business with customers and suppliers, among other critical functions.

Sophie Schmidt, founder and CEO of global tech reporting publication Rest of World, said shutdowns are increasingly happening in places where tech literacy is a challenge, adding that journalists have been crucial to building knowledge in those places and improving residents’ understanding of what they are up against.

The more widespread shutdowns become, however, the more normalized the practice is, she added.

Felicia Anthonio, a campaigner for digital advocacy non-profit Access Now, said one way to combat shutdowns is for affected populations to pressure telecom companies to challenge governments on the basis that shutdowns are breaches of contract.

Such a precedent exists: in India, service providers have taken the government to court, arguing that forcing them to switch off service violates their terms of service.

Section 230

Tech Groups, Free Expression Advocates Support Twitter in Landmark Content Moderation Case

The Supreme Court’s decision could dramatically alter the content moderation landscape.

Photo of Supreme Court Justice Clarence Thomas courtesy of Stetson University

WASHINGTON, December 8, 2022 — Holding tech companies liable for the presence of terrorist content on their platforms risks substantially limiting their ability to effectively moderate content without overly restricting speech, according to several industry associations and civil rights organizations.

The Computer & Communications Industry Association, along with seven other tech associations, filed an amicus brief Tuesday emphasizing the vast amount of online content generated on a daily basis and the existing efforts of tech companies to remove harmful content.

A separate coalition of organizations, including the Electronic Frontier Foundation and the Center for Democracy & Technology, also filed an amicus brief.

The briefs were filed in support of Twitter as the Supreme Court prepares to hear Twitter v. Taamneh in 2023, alongside the similar case Gonzalez v. Google. The cases, brought by relatives of ISIS attack victims, argue that social media platforms allow groups like ISIS to publish terrorist content, recruit new operatives and coordinate attacks.

Both cases were initially dismissed, but an appeals court in June 2021 overturned the Taamneh dismissal, holding that the case adequately asserted its claim that tech platforms could be held liable for aiding acts of terrorism. The Supreme Court will now decide whether an online service can be held liable for “knowingly” aiding terrorism if it could have taken more aggressive steps to prevent such use of its platform.

The Taamneh case hinges on the Anti-Terrorism Act, which says that liability for terrorist attacks can be placed on “any person who aids and abets, by knowingly providing substantial assistance.” The case alleges that Twitter did this by allowing terrorists to utilize its communications infrastructure while knowing that such use was occurring.

Gonzalez is more directly focused on Section 230, a provision under the Communications Decency Act that shields platforms from liability for the content their users publish. The case looks at YouTube’s targeted algorithmic recommendations and the amplification of terrorist content, arguing that online platforms should not be protected by Section 230 immunity when they engage in such actions.

Supreme Court Justice Clarence Thomas wrote in 2020 that the “sweeping immunity” granted by current interpretations of Section 230 could have serious negative consequences, and suggested that the court consider narrowing the statute in a future case.

Experts have long warned that removing Section 230 could have the unintended effect of dramatically increasing the amount of content removed from online platforms, as liability concerns would incentivize companies to err on the side of over-moderation.

Without some form of liability protection, platforms “would be likely to use necessarily blunt content moderation tools to over-restrict speech or to impose blanket bans on certain topics, speakers, or specific types of content,” the EFF and other civil rights organizations argued.

Platforms are already self-motivated to remove harmful content because failing to do so risks alienating their user base, CCIA and the other tech organizations said.

There is an immense amount of harmful content to be found online, and moderating it is a careful, costly and iterative process, the CCIA brief said, adding that “mistakes and difficult judgement calls will be made given the vast amounts of expression online.”

Free Speech

Noted Classical Liberal Legal Scholar Countenances Regulation of Social Media

Georgetown University professor Randy Barnett said that the ability to post on social media might be a civil right.

Photo of Randy E. Barnett, a legal scholar and constitutional law professor at Georgetown University, obtained from Flickr.

WASHINGTON, October 21, 2022 – Classical liberal political theory should acknowledge the need for government to regulate certain privately owned businesses that operate in the public sphere, said Randy Barnett, a legal scholar and constitutional law professor at Georgetown University.

Barnett’s argument, made Thursday at a Federalist Society web panel on the regulation of social media platforms, is significant in that even a well-known libertarian scholar is putting forth a plausible case for regulating speech on such technology platforms.

Between fully public and fully private entities, there is a middle category of privately-owned entities that operate in the public sphere, such as public accommodations and common carriers, Barnett said.

The Civil Rights Act of 1875, for instance, regulated “privately owned, public institutions such as railroads, inns, and even places of public amusement such as opera halls,” he explained. Barnett suggested that regulation of public accommodations can protect an individual’s “civil rights.”

“Civil rights are the rights that one gets when one leaves the state of nature and enters into civil society, and these are the rights that are basically the government protections of our preexisting natural rights, but they’re also more than, they are privileges you have as citizens,” Barnett argued. “You also have a civil right to be able to travel throughout the country and to enter into places of public accommodation as an equal to your fellow citizens,” he added.

Barnett said he wasn’t sure if social-media platforms should be considered public accommodations, however. “Are Facebook and Twitter in or are they out” of the public-accommodations category, he mused. “That’s the thing about which I think reasonable people can still disagree,” he said.

That social media companies have a First Amendment right to moderate content on their platforms had long been seen as a well-established principle of free speech in the United States. With increasing criticism of the tech sector from the Trump-infused element of the political right, however, the issue has become a more open question.

In 2021, to combat alleged discrimination against conservative speech, Texas and Florida each passed laws barring platforms from engaging in various kinds of viewpoint-based content moderation.

The 11th U.S. Circuit Court of Appeals largely struck down Florida’s law in May, but the 5th Circuit upheld the Texas statute in September. The 5th Circuit has stayed its decision pending a likely Supreme Court review.

Free Speech

Panel Hears Opposing Views on Content Moderation Debate

Some agreed there is egregious information that should be downranked on search platforms.

Screenshot of Renee DiResta, research manager at Stanford Internet Observatory.

WASHINGTON, September 14, 2022 – Panelists wrangled over how technology platforms should handle content moderation at an event hosted by the Lincoln Network Friday, with one arguing that search engines should neutralize misinformation that causes direct, “tangible” harms and another advocating an online content moderation standard that does not discriminate by viewpoint.

Debate about what to do with certain content on technology platforms has picked up steam since former President Donald Trump was removed last year from platforms including Facebook and Twitter for allegedly inciting the January 6, 2021, storming of the Capitol.

Search engines generally moderate content algorithmically, prioritizing certain results over others. Most engines, like Google, prioritize results from institutions generally considered to be credible, such as universities and government agencies.

That can be a good thing, said Renee DiResta, research manager at Stanford Internet Observatory. If search engines allow scams or medical misinformation to headline search results, she argued, “tangible” material or physical harms will result.

The internet moved communications from the “one-to-many” model of broadcast media – e.g., television and radio – to a “many-to-many” model, said DiResta. She argued that “many-to-many” interactions create social frictions and make possible the formation of social media mobs.

At the beginning of the year, Georgia Republican Representative Marjorie Taylor Greene was permanently removed from Twitter for allegedly spreading Covid-19 misinformation, the same reason Kentucky Senator Rand Paul was removed from Alphabet Inc.’s YouTube.

Lincoln Network senior fellow Antonio Martinez endorsed a more permissive content moderation strategy that – excluding content that incites imminent, lawless action – is tolerant of heterodox speech. “To think that we can epistemologically or even technically go in and establish capital-T Truth at scale is impossible,” he said.

Trump has said he is committed to a platform of open speech with the creation of his social media website Truth Social. Other platforms, such as social media site Parler and video-sharing website Rumble, have purported to allow more speech than the incumbents. SpaceX CEO Elon Musk previously committed to buying Twitter because of its policies prohibiting certain speech, though he now wants out of that commitment.

Alex Feerst, CEO of digital content curator Murmuration Labs, said that free-speech aphorisms – such as, “The cure for bad speech is more speech” – may no longer hold true given the volume of speech enabled by the internet.
