Free Speech

Legal Digital Framework Must Be Created For Content Moderation, Says Head of European Court of Human Rights

Screenshot of judge Róbert Spanó from German Marshall Fund of the United States event

March 16, 2021—The president of the European Court of Human Rights is recommending an autonomous legal body that oversees a digital framework for content moderation.

Róbert Spanó said Tuesday, at a talk hosted by the German Marshall Fund of the United States, that the framework would serve as a digital version of legal and due process principles, played out over the internet and unconstrained by the borders that generally restrain traditional legal systems, while ensuring tech companies are kept in line to suppress hate speech and content that incites violence.

Spanó, a judge who has served on the European court since 2013, pressed the importance of content moderation in the digital age. This week, the South by Southwest conference has played host to discussions about reforming Section 230 and content moderation.

“What the internet does is it creates an environment where certain interactions are occurring outside the classical paradigm of human interaction being regulated by governmental power,” Spanó said. He explained that even though this digital environment is sustained by private actors instead of the government, it should not be immune to classical rule of law and due process principles.

Spanó said for this environment to sustain itself, platforms in the digital space must craft a framework that emulates those principles.

Facebook, for its part, has established an autonomous Oversight Board to provide it with recommendations on what it should do about certain content.

But tech companies’ approaches to content moderation have been largely patchwork: Twitter and Facebook use a blend of artificial intelligence and human moderators, while Patreon, a website that facilitates payments to creators, uses only human moderators. Their approaches to moderation can be radically divergent.

A digital legal system

Spanó proposed a system of governance that exists only in the digital realm. While this system would not operate in the traditional way, where courts are limited by physical jurisdiction, it would still promote classical legal principles, albeit in an expedited fashion to match the breakneck speed at which discourse occurs online.

He added that this system could be adjudicated by judges who are specially trained to operate in the digital realm. “I think it is a…duty of judges to re-educate ourselves about issues that arise,” Spanó said.

He cautioned that a failure to establish a framework with these goals would inevitably result in either threats to citizens’ autonomy or the rise of arbitrary decisions at the hands of those in power, whether private or otherwise.

Tech companies explain their moderation philosophies

On the same day Judge Spanó gave his talk, South by Southwest Online hosted a panel of experts representing the Oversight Board, Twitter, and Patreon, where they addressed their respective roles, concerns, and goals. They discussed scenarios ranging from the simple mislabeling of what they consider to be age-restricted content, all the way to violent extremism and hate speech.

Rachel Wolbers is the public policy manager for the Oversight Board. Unlike Twitter’s hybrid model, the Oversight Board is made up exclusively of human board members, who adjudicate content moderation decisions that may have first been flagged by AI on Facebook.

For the Oversight Board to address an issue, it must first be identified by Facebook as a potential rule violation, whether by AI or by a human moderator. After Facebook settles the issue, the alleged violator can choose to either accept Facebook’s ruling or petition the Oversight Board to review the post in question.

The Oversight Board chooses to look more closely at only a handful of cases, and so far it has issued just seven decisions. Of those seven, it has upheld Facebook’s ruling on a single occasion, in a case involving an ethnic slur directed at Azerbaijanis.

Wolbers said that if other platforms were interested in using the Oversight Board’s services in the future, the board would be receptive to the idea. Because the Oversight Board is still in its infancy, its role in the broader digital landscape remains to be seen, but it is perhaps a precursor to the wider frameworks Judge Spanó alluded to.

Section 230

Tech Groups, Free Expression Advocates Support Twitter in Landmark Content Moderation Case

The Supreme Court’s decision could dramatically alter the content moderation landscape.

Photo of Supreme Court Justice Clarence Thomas courtesy of Stetson University

WASHINGTON, December 8, 2022 — Holding tech companies liable for the presence of terrorist content on their platforms risks substantially limiting their ability to effectively moderate content without overly restricting speech, according to several industry associations and civil rights organizations.

The Computer & Communications Industry Association, along with seven other tech associations, filed an amicus brief Tuesday emphasizing the vast amount of online content generated on a daily basis and the existing efforts of tech companies to remove harmful content.

A separate coalition of organizations, including the Electronic Frontier Foundation and the Center for Democracy & Technology, also filed an amicus brief.

The briefs were filed in support of Twitter as the Supreme Court prepares to hear Twitter v. Taamneh in 2023, alongside the similar case Gonzalez v. Google. The cases, brought by relatives of ISIS attack victims, argue that social media platforms allow groups like ISIS to publish terrorist content, recruit new operatives and coordinate attacks.

Both cases were initially dismissed, but an appeals court in June 2021 overturned the Taamneh dismissal, holding that the case adequately asserted its claim that tech platforms could be held liable for aiding acts of terrorism. The Supreme Court will now decide whether an online service can be held liable for “knowingly” aiding terrorism if it could have taken more aggressive steps to prevent such use of its platform.

The Taamneh case hinges on the Anti-Terrorism Act, which says that liability for terrorist attacks can be placed on “any person who aids and abets, by knowingly providing substantial assistance.” The case alleges that Twitter did this by allowing terrorists to utilize its communications infrastructure while knowing that such use was occurring.

Gonzalez is more directly focused on Section 230, a provision under the Communications Decency Act that shields platforms from liability for the content their users publish. The case looks at YouTube’s targeted algorithmic recommendations and the amplification of terrorist content, arguing that online platforms should not be protected by Section 230 immunity when they engage in such actions.

Supreme Court Justice Clarence Thomas wrote in 2020 that the “sweeping immunity” granted by current interpretations of Section 230 could have serious negative consequences, and suggested that the court consider narrowing the statute in a future case.

Experts have long warned that removing Section 230 could have the unintended impact of dramatically increasing the amount of content removed from online platforms, as liability concerns will incentivize companies to err on the side of over-moderation.

Without some form of liability protection, platforms “would be likely to use necessarily blunt content moderation tools to over-restrict speech or to impose blanket bans on certain topics, speakers, or specific types of content,” the EFF and other civil rights organizations argued.

Platforms are already self-motivated to remove harmful content because failing to do so puts their user base at risk, the CCIA and the other tech organizations said.

There is an immense amount of harmful content to be found online, and moderating it is a careful, costly and iterative process, the CCIA brief said, adding that “mistakes and difficult judgement calls will be made given the vast amounts of expression online.”

Free Speech

Noted Classical Liberal Legal Scholar Countenances Regulation of Social Media

Georgetown University professor Randy Barnett said that the ability to post on social media might be a civil right.

Photo of Randy E. Barnett, a legal scholar and constitutional law professor at Georgetown University, obtained from Flickr.

WASHINGTON, October 21, 2022 – Classical liberal political theory should acknowledge the need for government to regulate certain privately owned businesses that operate in the public sphere, said Randy Barnett, a legal scholar and constitutional law professor at Georgetown University.

Barnett’s argument, made Thursday at a Federalist Society web panel discussing the regulation of social media platforms, is significant in that even a well-known libertarian scholar is putting forth a plausible case for regulating speech on such technology platforms.

Between fully public and fully private entities, there is a middle category of privately-owned entities that operate in the public sphere, such as public accommodations and common carriers, Barnett said.

The Civil Rights Act of 1875, for instance, regulated “privately owned, public institutions such as railroads, inns, and even places of public amusement such as opera halls,” he explained. Barnett suggested that regulation of public accommodations can protect an individual’s “civil rights.”

“Civil rights are the rights that one gets when one leaves the state of nature and enters into civil society, and these are the rights that are basically the government protections of our preexisting natural rights, but they’re also more than [that], they are privileges you have as citizens,” Barnett argued. “You also have a civil right to be able to travel throughout the country and to enter into places of public accommodation as an equal to your fellow citizens,” he added.

Barnett said he wasn’t sure if social-media platforms should be considered public accommodations, however. “Are Facebook and Twitter in or are they out” of the public-accommodations category, he mused. “That’s the thing about which I think reasonable people can still disagree,” he said.

That social media companies have a First Amendment right to moderate content on their platforms had long been considered a well-established principle of free speech in the United States. With increasing criticism of the tech sector from the Trump-infused element of the political right, the issue has now become a more open question.

In 2021, to combat alleged discrimination against speech by conservatives, Texas and Florida each passed laws barring platforms from engaging in various kinds of viewpoint-based content moderation.

The 11th U.S. Circuit Court of Appeals largely struck down Florida’s law in May, but the Fifth Circuit upheld the Texas statute in September. The Fifth Circuit has stayed the decision pending a likely Supreme Court review.

Free Speech

Panel Hears Opposing Views on Content Moderation Debate

Some agreed there is egregious information that should be downranked on search platforms.

Screenshot of Renee DiResta, research manager at Stanford Internet Observatory.

WASHINGTON, September 14, 2022 – Panelists wrangled over how technology platforms should handle content moderation at an event hosted by the Lincoln Network on Friday, with one arguing that search engines should neutralize misinformation that causes direct, “tangible” harms and another advocating an online content moderation standard that does not discriminate based on viewpoint.

Debate about what to do with certain content on technology platforms has picked up steam since former President Donald Trump was removed last year from platforms including Facebook and Twitter for allegedly inciting the January 6, 2021, storming of the Capitol.

Search engines generally moderate content algorithmically, prioritizing certain results over others. Most engines, like Google, prioritize results from institutions generally considered to be credible, such as universities and government agencies.

That can be a good thing, said Renee DiResta, research manager at Stanford Internet Observatory. If search engines allow scams or medical misinformation to headline search results, she argued, “tangible” material or physical harms will result.

The internet moved communications from the “one-to-many” model of broadcast media, such as television and radio, to a “many-to-many” model, said DiResta. She argued that “many-to-many” interactions create social frictions and make possible the formation of social media mobs.

At the beginning of the year, Georgia Republican Representative Marjorie Taylor Greene was permanently removed from Twitter for allegedly spreading Covid-19 misinformation, the same reason Kentucky Senator Rand Paul was removed from Alphabet Inc.’s YouTube.

Lincoln Network senior fellow Antonio Martinez endorsed a more permissive content moderation strategy that – excluding content that incites imminent, lawless action – is tolerant of heterodox speech. “To think that we can epistemologically or even technically go in and establish capital-T Truth at scale is impossible,” he said.

Trump has said he is committed to a platform of open speech with the creation of his social media website Truth Social. Other platforms, such as the social media site Parler and the video-sharing website Rumble, have purported to allow more speech than the incumbents. SpaceX CEO Elon Musk previously committed to buying Twitter because of its policies prohibiting certain speech, though he now wants out of that commitment.

Alex Feerst, CEO of digital content curator Murmuration Labs, said that free-speech aphorisms – such as, “The cure for bad speech is more speech” – may no longer hold true given the volume of speech enabled by the internet.
