Free Speech

Part II: Senators Josh Hawley and Ted Cruz Want to Repeal Section 230 and Break the Internet

Emily McPhie

Photo of Reddit Director of Policy Jessica Ashooh courtesy of Misk Global Forum

WASHINGTON, August 20, 2019 — Section 230 of the Communications Decency Act has been termed one of the most important and most misunderstood laws governing the internet.

In recent months, prominent critics from both sides of the aisle have called for the statute to either be repealed or altered so significantly that, if enacted, it would no longer serve its original purpose.

Sen. Josh Hawley, R-Mo., introduced a bill that would eliminate Section 230 protections for big tech platforms unless they could prove their political neutrality to the Federal Trade Commission every two years. Sen. Ted Cruz, R-Texas, has called for the statute to be repealed altogether.

But any such proposal should first carefully consider Section 230’s unique role in the digital ecosystem.

The concept behind Section 230 has its origins in the First Amendment

The statute’s basic premise — protecting the rights of speakers by limiting the liability of third parties who enable them to reach an audience — is hardly new; the First Amendment has served that purpose for decades.

In the 1959 case Smith v. California, the Supreme Court ruled that booksellers could not be held liable for obscene content in the books being sold, because the resulting confusion and caution would lead to over-enforcement, or “censorship affecting the whole public.”

Five years later, the court ruled in New York Times Co. v. Sullivan that failing to protect newspapers from liability for third-party advertisements would discourage them from carrying such ads, and would therefore shut off “an important outlet for the promulgation of information and ideas by persons who do not themselves have access to publishing facilities.”

“In theory, the First Amendment — the global bellwether protection for free speech — should partially or substantially backfill any reductions in Section 230’s coverage,” wrote Eric Goldman, a law professor at Santa Clara University, in an April blog post. “In practice, the First Amendment does no such thing.”

In a paper titled “Why Section 230 Is Better Than the First Amendment,” Goldman explained some of the “significant and irreplaceable substantive and procedural benefits” that are unique to the controversial statute.

Section 230 has pragmatic applications for a range of legal claims

For one, Section 230 has pragmatic applications for defamation, negligence, deceptive trade practices, false advertising, intentional infliction of emotional distress, and dozens of other legal doctrines, some of which have little or no First Amendment defense.

In addition, Section 230 offers more procedural protections and greater legal certainty for defendants. It enables early dismissals, which can save smaller services from financial ruin. It is more predictable than the First Amendment for litigants. It preempts conflicting state laws and facilitates constitutional avoidance.

Most major tech platforms support Section 230, and experts widely agree that the internet would not have been able to develop without the protection of such a law.

“If we were held liable for everything that the users potentially posted…we fundamentally would not be able to exist,” said Jessica Ashooh, Reddit’s director of policy, at a July forum.

But also in July, one prominent tech company broke with the others to support a “reasonable care” standard like that proposed by Danielle Citron, a law professor at the University of Maryland, and Benjamin Wittes of the Brookings Institution. IBM executive Ryan Hagemann wrote in a blog post that this “would provide strong incentives for companies to limit illegal and illicit behavior online, while also being flexible enough to promote continued online innovation.”

Should online platforms be responsible for deleting objectively harmful content?

Companies should be held legally responsible for quickly identifying and deleting content such as child pornography or the promotion of mass violence or suicide, Hagemann continued. Adding this standard to Section 230 “would add a measure of legal responsibility to what many platforms are already doing voluntarily.”

But Goldman took a different tack. He strongly cautioned against proposals offering Section 230 protections only to defendants who were acting in so-called good faith, warning that “such amorphous eligibility standards would negate or completely eliminate Section 230’s procedural benefits.”

Hagemann, on the other hand, has defended the importance of a compromise-oriented middle ground. Current rhetoric from Congress suggests that changes to the statute are imminent, he said at a panel two weeks after IBM’s statement, and finding a compromise will prevent an extreme knee-jerk reaction from lawmakers who may not view the digital economy with the necessary nuance.

Senators Cruz and Hawley are gunning for effective repeal of Section 230

And as feared, members of Congress such as Cruz and Hawley have skipped right over compromise and started calling for the complete evisceration of Section 230.

Few would claim that Section 230 is perfect; it was written for a digital landscape that has since evolved in previously unimaginable ways. But allowing a body of five commissioners to determine the vague standard of “politically neutral” every two years would almost certainly lead to extreme inconsistency and partisanship.

Moreover, some fear that — contrary to Hawley’s stated intent — his bill might actually be the one thing that cements the major tech giants in their current place of power.

“Even if its initial application were limited to websites above a certain size threshold, that threshold would be inherently arbitrary and calls to lower it to cover more websites would be inevitable,” said TechFreedom President Berin Szóka.

Rather than keeping tech giants like Facebook and Google in check, conditioning Section 230 protections on perceived neutrality could actually benefit them by stifling any potential competition.

“At a time when we’re talking about antitrust investigations and we’re wondering if the biggest players are too big, the last thing we want to do is make a law that makes it harder for smaller companies to compete,” said Ashooh of Reddit.

“Admittedly, it feels strange to tout Section 230’s pro-competitive effect in light of the dominant marketplace positions of the current Internet giants, who acquired their dominant position in part due to Section 230 immunity,” wrote Goldman. “At the same time, it’s likely short-sighted to assume that the Internet industry has reached an immutable configuration of incumbents.”

Other articles in this series:

Section I: The Communications Decency Act is Born

Section II: How Section 230 Builds on and Supplements the First Amendment

Section III: What Does the Fairness Doctrine Have to Do With the Internet?

Section IV: As Hate Speech Proliferates Online, Critics Want to See and Control Social Media’s Algorithms

Emily McPhie was Assistant Editor with Broadband Breakfast. She studies communication design and writing at Washington University in St. Louis, where she is a news editor for campus publication Student Life. She is a founding board member of Code Open Sesame, an organization that teaches computer skills to underprivileged children in six cities across Southern California.

Courts

Supreme Court Declares Trump First Amendment Case Moot, But Legal Issues For Social Media Coming

Benjamin Kahn

Photo of Justice Clarence Thomas in April 2017 by Preston Keres in the public domain

April 5, 2021—Although the Supreme Court accepted a petition that allowed it to avoid ruling on whether a president can block social media users, Justice Clarence Thomas on Monday issued an opinion that may foreshadow future legal battles over social media in the United States.

On Monday, the Supreme Court ruled moot a lawsuit over whether former President Donald Trump could block followers on Twitter and sent it back to a lower court, after granting a petition by the federal government to end the case because Trump is no longer president.

The case dates back to March 2018, when the Knight First Amendment Institute and others sued then-President Trump in the Southern District of New York for blocking users based on their political views, arguing the practice violated the First Amendment.

The lower court judge agreed, and the decision was upheld by the U.S. Court of Appeals for the Second Circuit.

In accepting the government’s petition, Justice Thomas wrote that adjudicating legal issues surrounding digital platforms is uniquely difficult. “Applying old doctrines to new digital platforms is rarely straightforward,” he wrote. The case hinged on the constitutionality of then-President Trump blocking people from interacting with his Twitter account, which the plaintiffs argued was a protected public forum.

Thomas stated that while the court was able to resolve this case by vacating the decision below, that likely would not be an option in the future. He went on to say that digital platforms place “concentrated control of so much speech in the hands of a few private parties.”

He continued: “We will soon have no choice but to address how our legal doctrines apply to highly concentrated, privately owned information infrastructure such as digital platforms.”

Even though Facebook and Google were not the platforms in question in this case, Thomas pointed to them as “dominant digital platforms” and stated that they have “enormous control over speech.” He stated that Google, Facebook, and Twitter have the capabilities to suppress information and speech at will, and referenced the “cataclysmic consequences” for authors that Amazon disagrees with.

Thomas also rejected the notion that other options exist.

“A person always could choose to avoid the toll bridge or train and instead swim the Charles River or hike the Oregon Trail. But in assessing whether a company exercises substantial market power, what matters is whether the alternatives are comparable.”

Section 230

Sen. Mike Lee Promotes Bills Valuing Federal Spectrum, Requiring Content Moderation Disclosures

Tim White

Screenshot of Mike Lee taken from Silicon Slopes event

April 5, 2021 – Sen. Mike Lee, R-Utah, said Friday that spectrum used by federal agencies is not being utilized efficiently, citing legislation he introduced earlier this year that would evaluate the allocation and value of federally-reserved spectrum.

The Government Spectrum Valuation Act, S.553, introduced March 3, directs the National Telecommunications and Information Administration to consult with the Federal Communications Commission and the Office of Management and Budget to estimate the value of spectrum between 3 kilohertz and 95 gigahertz that is assigned to federal agencies.

Lee spoke at an event hosted by the Utah tech association Silicon Slopes on Friday about the legislation, in addition to other topics, including Section 230.

Some spectrum bands are reserved for federal agencies as they need them, but that spectrum is not always managed efficiently, Lee said. Some bands are used by the Department of Defense for ‘national security,’ for example, but when agencies are asked what that spectrum is used for, the answer is, ‘we can’t tell you because of national security,’ he said.

“Just about everything we do on the internet is carried out through a mobile device, and all of that requires access to spectrum,” he said.

Lives are increasingly affected and enhanced by connections to the internet, often wireless ones, which increases the need for the government to manage spectrum efficiently, he said. Some of the bands are highly valuable, he added, comparing them to the “beach front property” of spectrum.

Legislation changing Section 230

Lee also spoke on Section 230, a statute that protects online companies from liability for content posted by their users. It’s a hot topic for policymakers right now as they consider regulating social media platforms.

Both Republicans and Democrats want more regulation of tech companies, but for different reasons. Democrats want more moderation of alleged hate speech and other content, citing the January 6 riot at the Capitol as one example of insufficient moderation. Republicans, on the other hand, including Lee, allege that social media companies censor or remove right-leaning political content but do not hold left-leaning content to the same standard.

Lee said platforms have the right to be as politically biased as they want, but it becomes a problem when their terms of service or their CEOs publicly state that they are neutral while they moderate content from a non-neutral standpoint.

Lee expressed hesitation about repealing or changing Section 230. “If you just repealed it altogether, it would give, in my view, an undue advantage to big market incumbents,” he said. One solution is supplementing Section 230 with additional clarifying language or new legislation, he said.

That’s why he came up with the PROMISE Act, legislation he introduced on February 24 that would require platforms to disclose their rules for content moderation and permit the Federal Trade Commission to take corrective action against companies that violate those disclosed rules. “I don’t mean it to be an exclusive solution, but I think it is a reasonably achievable step toward some type of sanity in this area,” he said.

Sen. Amy Klobuchar, D-Minn., and several of her colleagues have also drafted Section 230 legislation that would maintain the spirit of the liability protection but remove it for paid content.

Section 230

Pressed by Congress, Big Tech Defends Itself and Offers Few Solutions After Capitol Riot

Tim White

Photo of Google CEO Sundar Pichai from a December 2018 hearing before the House Judiciary Committee by Drew Clark

March 26, 2021 – The heads of the largest social media companies largely defended their platforms, reiterated what they’ve done, and offered few solutions to the problems that ail them during a congressional hearing Thursday.

But under harsh questioning from the House Energy and Commerce Committee, none of the CEOs of Google, Facebook or Twitter was given more than 30 to 60 seconds to respond to questions on a given topic.

The hearing addressed misinformation on social media in the aftermath of the January 6 Capitol riot. The CEOs said dealing with the problem of dis- and misinformation on their platforms is more difficult than people think.

“The responsibility here lies with the people who took the actions to break the law and do the insurrection,” Facebook CEO Mark Zuckerberg said in response to a question about whether the platforms were to blame for the riot.

“Secondarily, also, the people who spread that content, including the president, but others as well, with repeated rhetoric over time, saying that the election was rigged and encouraging people to organize. I think those people bear the primary responsibility as well,” Zuckerberg said.

Zuckerberg added that “polarization was rising in America long before social networks were even invented,” blaming the “political and media environment that drives Americans apart.”

A ‘complex question’ of fault

Google CEO Sundar Pichai said it’s a “complex question” in response to the question of who’s at fault for the riot. Twitter CEO Jack Dorsey, however, was more direct: “Yes, but you also have to take into consideration a broader ecosystem; it’s not just about the technology platforms we use,” he said.

It was the first time Zuckerberg, Dorsey and Pichai had appeared on Capitol Hill since the January 6 insurrection at the U.S. Capitol. The hearing was spurred by the riot and by the turbulent presidential election that concluded in Joe Biden’s win and Donald Trump’s ban from Twitter and Facebook. For several months, Congress has been eyeing possible Section 230 reform to address alleged problems in the tech industry.

“Our nation is drowning in misinformation driven by social media. Platforms that were once used to share [pictures of] kids with grandparents are all-too-often havens of hate, harassment and division,” said Rep. Mike Doyle, D-Pa., chairman of the Communications and Technology subcommittee, who led the hearing. Doyle alleged the platforms “supercharged” the riot.

Both Democratic and Republican members of the committee laid out a variety of grievances during the five-hour meeting, and while they didn’t all share the same concerns, all agreed that something needs to be done.

“I hope you can take away from this hearing how serious we are, on both sides of the aisle, to see many of these issues that trouble Americans addressed,” Doyle said.

Congressional concerns

On the left side of the political aisle, the main criticism of the tech giants was the spread of misinformation and extremism, including falsehoods about COVID-19 vaccines, climate change and the 2020 presidential election, which Trump alleged was rigged against him.

“It is not an exaggeration to say that your companies have fundamentally and permanently transformed our very culture, and our understanding of the world,” said Rep. Jan Schakowsky, D-Illinois. “Much of this is for good, but it is also true that our country, our democracy, even our understanding of what is ‘truth’ has been harmed by the proliferation and dissemination of misinformation and extremism,” she said.

“Unfortunately, this disinformation and extremism doesn’t just stay online, it has real-world, often dangerous and even violent consequences, and the time has come to hold online platforms accountable,” said Rep. Frank Pallone, D-N.J.

From the right, Republican members voiced concerns about excessive censorship, easy access to opioids, and the harm they said social media does to children.

“I’m deeply concerned by your decisions to operate your companies in a vague and biased manner, with little to no accountability, while using Section 230 as a shield for your actions and their real-world consequences,” said Rep. Bob Latta, R-Ohio. “Your companies had the power to silence the president of the United States, shut off legitimate journalism in Australia, shut down legitimate scientific debate on a variety of issues, dictate which articles or websites are seen by Americans when they search the internet,” he said.

“Your platforms are my biggest fear as a parent,” said Rep. Cathy McMorris Rodgers, R-Wash., expressing frustration over the impact that social media has on children. “It’s a battle for their development, a battle for their mental health, and ultimately, a battle for their safety,” she said, citing a rise in teen suicides since 2011. “I do not want you defining what is true for them, I do not want their future manipulated by your algorithms,” she said.

Platforms say it’s challenging, reiterate initiatives

In response to the many criticisms, Zuckerberg said that while content moderation is central to addressing misinformation, it is important to protect speech as much as possible while taking down illegal content, which can be a huge challenge. Bullying, for example, hurts the victim, but there is no clear line at which speech can simply be censored, he said.

Pichai said that Google’s mission is about organizing and delivering information to the world and allowing free expression while also combatting misinformation. But it is an evolving challenge, he said, because approximately 15 percent of Google searches each day are new, and 500 hours of video are uploaded to YouTube every minute. To reinforce that point, he noted that 18 months ago no one had heard of COVID-19, and in 2020 ‘coronavirus’ was the most trending search.

Dorsey expressed a similar sentiment about the evolving challenge of balancing freedom of expression with content moderation. “We observe what’s happening on our service, we work to understand the ramifications, and we use that understanding to strengthen our operations. We push ourselves to improve based on the best information we have,” he said.

The best way to face new challenges is to narrow the problem down to where action will have the greatest impact, Dorsey said. Disinformation, for example, is a broad concept, so Twitter focused on disinformation that leads to offline harm, he said, concentrating on three specific categories: manipulated media, public health and civic integrity.

“Ultimately, we’re running a business, and a business wants to grow the number of customers it serves. Enforcing a policy is a business decision,” Dorsey said.

Dorsey noted Twitter’s new Bluesky project, a decentralized internet protocol that various social media companies would be able to use, rather than a platform owned by a single company. He said it would improve the social media environment by increasing innovation around business models and recommendation algorithms, and by putting moderation controls in the hands of individuals instead of private companies. But others already working in a similar technology space say the project is not without its problems.

On Section 230 reform

On the question of changing Section 230 of the Communications Decency Act, which grants social media companies immunity from liability for user-generated content, Zuckerberg suggested two specific changes: platforms should be required to issue transparency reports about harmful content, and they need better moderation of content that is clearly illegal. These changes should apply only to large social media platforms, he said, but he did not specify how to distinguish a large platform from a small one.

Dorsey said those may be good ideas, but that it could be difficult to determine what counts as a large or small platform, and that such stipulations may incentivize the wrong things.

When asked about Instagram’s new version for children, Zuckerberg confirmed it was in the planning stage and many details were still being worked out.

Several Democrats raised concerns about minority populations, citing as one example the March 16 shooting in Atlanta that killed eight people, including several Asian American women. Rep. Doris Matsui, D-Calif., asked why hashtags such as #kungflu and #chinavirus were not removed from Twitter.

Dorsey responded that Twitter does take action against hate speech, but it can also be a challenge because it’s not always simple to distinguish between content that supports an idea and counter speech that condemns the support of that idea.

Multiple members asked the tech leaders about specific instances in which platform algorithms failed at content moderation. Democrats pointed to examples of posts containing misinformation or hate speech, while Republicans cited examples of conservative content being removed.

Both Zuckerberg and Dorsey said their systems are not perfect and that expecting perfection is not realistic. Some content will always slip past automated systems and must be addressed individually, Zuckerberg said.

In response to Rep. Steve Scalise’s reference to a 2020 New York Post story about Hunter Biden that was taken down, Dorsey acknowledged that Twitter has made mistakes in some instances.

Editor’s Note: This story has been revised to add in a second paragraph that more accurately captured the fact that, while the tech executives offered few solutions, they were given little opportunity to do so by members of Congress. Additionally, the word “secondarily” was added back into Facebook CEO Mark Zuckerberg’s statement about who bore responsibility for the insurrection.
