WASHINGTON, July 22, 2019 — The primary responsibility of moderating online platforms lies with the platforms themselves, making Section 230 protections essential, said panelists at New America’s Open Technology Institute on Thursday.
Although some policymakers are attempting to solve the problem of digital content moderation, Open Technology Institute Director Sarah Morris noted that the First Amendment limits the government’s ability to regulate speech, leaving platforms to handle “the vast majority of decision making.”
“Washington lawmakers don’t have the capacity to address these challenges,” said Francella Ochillo, executive director of Next Century Cities.
It’s up to tech companies to do more than they are currently doing to tackle hate speech on their platforms, said David Snyder, executive director of the First Amendment Coalition.
Facebook Public Policy Manager Shaarik Zafar acknowledged that the tech community needs to do a better job both of enforcing its policies and of creating those policies in the first place.
Content moderation is an extremely difficult process, he said. Although algorithms are fairly good at detecting terrorism and child exploitation, other issues can be more difficult, such as trying to distinguish between journalists and activists raising awareness of atrocities versus people glorifying violent extremism.
No single solution can eliminate hate speech, and any solution found will have to be frequently revisited and updated, said Ochillo. But that doesn’t mean that platforms and others shouldn’t make a significant effort, she said, pointing out that people suffer real-world secondary effects from hateful content posted online.
Zafar emphasized Facebook’s commitment to onboarding additional civil rights expertise as the platform continues to tackle the problem of hate speech.
He also highlighted Facebook’s recently announced external oversight board, which will be made up of a diverse group of experts with experience in content, privacy, free expression, human rights, safety, and other relevant disciplines.
Facebook would defer to the board on difficult content moderation questions, said Zafar, and would follow its recommendations even when company executives disagree.
But as companies take steps to fine-tune and enforce their terms of service, transparency is of the utmost importance, Snyder said.
Content moderation algorithms should be made public so that independent researchers can test them for bias, suggested Sharon Franklin, OTI’s director of surveillance and cybersecurity policy.
Franklin also highlighted the Santa Clara Principles, a set of guidelines for transparency and accountability in content moderation. The principles call on companies to publish the numbers of posts removed and accounts suspended, provide notice to users whose content or account is removed, and create a meaningful appeal process.
Allowing content moderation under Section 230 of the Communications Decency Act has spurred innovation and made it possible for individuals and companies to have access to massive audiences through social media, said Zafar.
Without those protections, he continued, companies might choose to forgo content moderation altogether, leaving all sorts of hate speech, misinformation, and spam on the platforms to the point that they might actually become unusable.
The other potential danger of repealing the law would be companies erring on the side of caution and over-enforcing their policies, said Franklin. Section 230 actually leads to less censorship, she said, because it allows for nuanced content moderation.
The Open Technology Institute supports Section 230 and is very concerned about the recent attacks that have been made on it, Franklin added.
Section 230 is “far from perfect,” said Snyder, but it’s much better than any of the plans that have been proposed to modify it or than not having it at all.
Facebook and other platforms give voice to a wide range of ideologies, and people from all backgrounds are able to successfully gain significant followings, said Zafar, emphasizing that the company’s purpose is to serve everybody.
(Photo of New America event by Emily McPhie.)