Legal Digital Framework Must Be Created For Content Moderation, Says Head of European Court of Human Rights
Screenshot of Judge Róbert Spanó from the German Marshall Fund of the United States event

March 16, 2021—The president of the European Court of Human Rights is recommending an autonomous legal body that oversees a digital framework for content moderation.

Róbert Spanó said Tuesday, at a talk hosted by the German Marshall Fund of the United States, that the framework would serve as a digital version of legal and due process principles playing out over the internet, unconstrained by the borders that generally confine traditional legal systems, and would ensure that tech companies are kept in line to suppress hate speech and content that incites violence.

Spanó, who has served as a judge on the European court since 2013, pressed the importance of content moderation in the digital age. This week, the South by Southwest conference has played host to discussions about reforming Section 230 and content moderation.

“What the internet does is it creates an environment where certain interactions are occurring outside the classical paradigm of human interaction being regulated by governmental power,” Spanó said. He explained that even though this digital environment is sustained by private actors instead of the government, it should not be immune to classical rule of law and due process principles.

Spanó said for this environment to sustain itself, platforms in the digital space must craft a framework that emulates those principles.

Facebook, for its part, has established an autonomous Oversight Board to provide it with recommendations on what it should do about certain content.

But tech companies’ approaches to content moderation have been largely patchwork: Twitter and Facebook use a blend of artificial intelligence and human moderators, while Patreon, a website that facilitates payments to creators, uses only human moderators. And their approaches to moderation can diverge radically.

Spanó proposed a system of governance that exists only in the digital realm. While this system would not operate in the traditional way, where courts are limited by physical jurisdiction, it would still promote classical legal principles, albeit in an expedited fashion to match the breakneck speed at which discourse occurs online.

He added that this system could be adjudicated by judges who are specially trained to operate in this digital realm. “I think it is a…duty of judges to re-educate ourselves about issues that arise,” Spanó said.

He cautioned that a failure to establish a framework with these goals would inevitably result in either threats to citizens’ autonomy or the rise of arbitrary decisions at the hands of those in power, whether they be private actors or otherwise.

Tech companies explain their moderation philosophies

On the same day Judge Spanó gave his talk, South by Southwest Online hosted a panel of experts representing the Oversight Board, Twitter, and Patreon, where they addressed their respective roles, concerns, and goals. They discussed scenarios ranging from the simple mislabeling of what they consider to be age-restricted content, all the way to violent extremism and hate speech.

Rachel Wolbers is the public policy manager for the Oversight Board. Like Twitter’s hybrid model, the board’s process pairs machines with people: the board itself is made up exclusively of human members, who adjudicate content moderation decisions that may have first been flagged by AI on Facebook.

For the Oversight Board to take up an issue, it must first be identified by Facebook as a potential rule violation, whether that violation is flagged by AI or by a human moderator. After Facebook settles an issue, the alleged violator can choose to either accept Facebook’s ruling or petition the Oversight Board to review the post in question.

The Oversight Board chooses to look closely at only a handful of cases, and it has made just seven decisions so far. Of those seven cases, it has upheld Facebook’s ruling on a single occasion, in a case involving an ethnic slur directed at Azerbaijanis.

Wolbers offered that if other platforms were interested in using the Oversight Board’s services in the future, the board would be receptive to the idea. Because the Oversight Board is still in its infancy, its role in the broader digital landscape remains to be seen, but it is perhaps a precursor to the wider frameworks Judge Spanó alluded to.