Panelists Call for Stakeholder Collaboration to Establish Trust in Content Moderation Processes
Screenshot of Jeff Jarvis, professor at the Newmark School of Journalism, from the Aspen Institute webinar

June 24, 2020 — Four members of the Transatlantic High Level Working Group on Content Moderation Online and Freedom of Expression detailed the group’s newly released report entitled “Freedom and Accountability: A Transatlantic Framework for Moderating Speech Online,” on a Tuesday webinar hosted by the Aspen Institute.

The report is the culmination of two years of investigation and input by 28 legislators, government officials, tech executives, civil society leaders and academics across North America and Europe.

“Members of the group held the common goal of seeking and promoting the best practices to tackle hate speech, violent extremism, and disinformation online, without chilling freedom of expression or further fracturing the Internet,” said Susan Ness, distinguished fellow at the Annenberg Public Policy Center.

In the report, the group calls for greater transparency in content moderation and offers a framework intended to increase platform accountability.

“Transparency is the way towards accountability,” said Jeff Jarvis, a professor at the Newmark School of Journalism. “Data about what content is moderated by platforms must be made transparent to the research community in order to create evidence-based policy.”

According to the panel, increased transparency would not only generate more quantitative data to inform future policy decisions, but would also help establish trust among stakeholders, a step the panelists deemed necessary for building more democratic platforms.

Panelists argued that restoring trust among government, tech companies and the public was necessary to tackle future moderation challenges, as finding solutions will require all of these entities to work together.

Eileen Donahoe, executive director of Stanford's Global Digital Policy Incubator, called for increased collaboration among all stakeholders, noting that netizens' voices need to be particularly amplified in the conversation around moderation decisions.

Donahoe argued that a lack of understanding is affecting the institutions involved, stating that “governments are confused about whether they want the private sector to take down more content, while private sector companies are confused on the rules that apply to them, what they should prohibit, and what the best course of action is.”

The Transatlantic Working Group’s methods

To create their framework, the group held round table hearings with specific tech companies. Through this process, the group established what they hope will be a long tradition of using research and evidence to foster more democratic speech environments online.

The members quickly realized that there is no one-size-fits-all approach to moderating content. Because of this, the report's framework focuses on transparency and accountability in content moderation, standards upheld in democracies across the globe.

While the group’s report found that transparency is the most crucial element of future moderation efforts, members pointed out that platforms are extremely wary of releasing this data.

To overcome this, one of the report’s five central recommendations calls for establishing a multi-tier disclosure system, so that data is released only on a need-to-know basis and only to the parties who require it.

“What information do researchers need to know? What does the public need to know? These are extremely difficult questions to ask, but the present system is no system at all,” Jarvis said.

A second recommendation called for platforms to establish effective redress mechanisms, such as social media councils. Facebook’s Oversight Board is a prime example of this type of external oversight body, panelists said.

A third recommendation calls for targeting regulation at individual malicious actors rather than content at large, as “going against bad actors is more effective than targeting content and has less of a chilling effect,” Donahoe said.

The search for solutions going forward

“The solution will have to be a combination of automation, human moderation and platform initiatives,” Donahoe said. “There is no silver bullet solution.”

“Revoking Section 230 is not the solution, as imposing liability on platforms for user generated speech would have consequences for expression that are gigantic,” she continued.

“We need to encourage platforms to do more to protect democracy in the name of their own free expression,” she said. “The private sector has immense power — they should choose to accept it.”

“This is just the beginning of lots of research that needs to be done alongside tech companies,” Ness concluded. “It cannot be done by just the government or stakeholders alone.”

Many of the arguments made in the Transatlantic Working Group’s report resonated with panelists participating in a webinar hosted by New America on Tuesday evening.

Kate Klonick, an assistant professor of law at St. John’s University Law School, echoed the report’s ideals, saying that “we have to move towards creating transparency and accountability.”

David Kaye, a clinical professor of law at the University of California, Irvine, similarly urged that it was critical to know “what set of principles is being used to make moderation decisions.”

One additional suggestion Klonick offered was building better user participation tools into platforms, both to collect data on how netizens think certain content should be moderated and to use that feedback to improve moderation systems.
