June 22, 2020 — Legal experts disagreed on Monday about whether Section 230 is exacerbating or helping to solve problems with the current state of online content moderation.
“I think the real friction point is because Section 230 is counterintuitive — it says that you can exercise control and not be liable,” said Eric Goldman, professor at the Santa Clara University School of Law, speaking at a Monday panel hosted by Yale Law School. “And I think at the core there’s just some people who can’t accept that type of deal.”
Miami Law School Professor Mary Anne Franks claimed that the biggest problem with Section 230 was that it protects an overly broad category of internet content by treating all online activity as speech, extending too many protections to various platforms.
“We’re presuming, first of all, that none of these actors — whether you’re Facebook or a blog or anything else — ever has any responsibility for… facilitating content that causes extraordinary harm, even if they’re fully aware of it, even if they’re making profits from it, even if it’s completely perceivable how injurious it would be,” Franks said.
“I don’t think you have to read 230 that way and I think it’s a real shame that courts have,” she said.
Would changes to Section 230 disincentivize platforms from moderating content?
Removing liability protections would disincentivize platforms from doing their best to moderate content, and although these efforts have been and will continue to be imperfect, they are preferable to no moderation at all, Goldman said.
“My fear is any of the regulatory reforms that are being discussed today don’t actually value the ability of people to talk to each other,” he added. “If we’re going to be able to preserve that conversation, we have to make sure we understand how the regulatory changes might undermine that.”
The current system has failed to protect online speech for certain demographics, said Franks.
“If you ask women, if you ask racial minorities, if you ask sexual minorities how freely they feel that they can speak online, listen to them tell you about the death threats and the revenge porn and the doxing and the stalking and all the rest of it that’s actually making it very hard for them to speak,” she said.
Rather than accepting the status quo as perfect, Franks said, policymakers should ask how they can address various preventable and foreseeable harms.
Potentially removing ‘revenge porn’ from Section 230 immunity
One such action, she suggested, could be declaring nonconsensual pornography and violations of sexual privacy to be federal criminal offenses, thus removing such content from Section 230 immunity.
“If we’re going to take some lessons from the Good Samaritan parable, it’s that people do terrible things and lots of people let them do it,” Franks said. “And it’s only the people who don’t let them do it and who are not in fact benefiting from that terrible thing that deserve a reward or immunity.”
But perfect content moderation is impossible, Goldman countered. The conversation should focus on how policies can be calibrated for better outcomes.
“I think that our politicians are now prepared to burn down the internet in the name of trying to fix it,” he said. “And I don’t think they realize how much we as individuals derive personal value from the internet.”
The country’s ability to transition to remote work and education during the coronavirus pandemic is in large part due to the prevalence of user-generated content services, Goldman claimed.
“If we value that, [we should] make sure that we’re thinking about how we can preserve that and consider how easy it is to take for granted — not only that we have it, but that the legal framework is helping us get it,” he said.
Although he acknowledged that further changes to Section 230 were likely inevitable, Jeff Kosseff, assistant professor of cybersecurity law at the United States Naval Academy, cautioned against making broad generalizations painting the statute as the root of all internet problems.
Instead, he said that many current discussions about the statute should be focused on privacy laws or the First Amendment.
“The first question should be, ‘Is this a Section 230 problem?’” he said.