
Big Tech

A Short History of Online Free Speech, Part I: The Communications Decency Act Is Born


Photo of Chuck Grassley in April 2011 by Gage Skidmore used with permission

WASHINGTON, August 19, 2019 — Despite all the Sturm und Drang surrounding Section 230 of the Communications Decency Act today, the measure was largely ignored when first passed into law 23 years ago. A great deal of today’s discussion ignores the statute’s unique history and purposes as part of the short-lived CDA.

In this four-part series, Broadband Breakfast reviews the past with an eye toward current controversies and the future of online free speech.

This article looks at content moderation on early online services, and how that fueled concern about indecency in general. On Tuesday, we’ll look at how Section 230 is similar to and different from America’s First Amendment legacy.

On Wednesday, in Part III, Broadband Breakfast revisits the reality and continuing mythology surrounding the “Fairness Doctrine.” Does it or has it ever applied online? And finally, on Thursday, we’ll envision what the future holds for the legal treatment of “hate speech.”

While most early chat boards did not moderate, Prodigy did — to its peril

The early days of the internet were dominated by online service providers such as America Online, Delphi, CompuServe and Prodigy. CompuServe did not engage in any form of content moderation, whereas Prodigy positioned itself as a family-friendly alternative by enforcing content guidelines and screening offensive language.

It didn’t take long for both platforms to be sued for defamation. In the 1991 case Cubby v. CompuServe, the federal district court in New York ruled that CompuServe could not be held liable for third party content of which it had no knowledge, similar to a newsstand or library.

But in 1995, the New York Supreme Court ruled in Stratton Oakmont v. Prodigy that the latter platform had taken on liability for all posts simply by attempting to moderate some, thereby exercising editorial control.

“That such control is not complete…does not minimize or eviscerate the simple fact that Prodigy has uniquely arrogated to itself the role of determining what is proper for its members to post and read on its bulletin boards,” the court wrote.

Prodigy had more than two million subscribers, who collectively generated 60,000 new postings per day — far more than the platform could review on an individual basis. The decision left platforms with a stark choice: review every single post or forgo content moderation altogether.

Many early supporters of the internet criticized the ruling from a business perspective, warning that penalizing online platforms for attempting to moderate content would push them toward not moderating at all. The resulting platforms would be less usable and, by extension, less successful.

The mid-1990s seemed to bring a cultural crisis of online indecency

But an emerging cultural crisis also drove criticism of the Stratton Oakmont court’s decision. As a vast array of content suddenly became available to anyone with computer access, parents and lawmakers grew panicked about the new accessibility of indecent and pornographic material, especially to minors.

A Time Magazine cover from just two months after the decision depicted a child with bulging eyes and dropped jaw, illuminated by the ghastly light of a computer screen. Underneath a bold title reading “cyberporn” in all caps, an ominous headline declared the problem to be “pervasive and wild.”

And then it posed the question that was weighing heavily on certain members of Congress: “Can we protect our kids — and free speech?”

The foreboding study behind the cover story, which was entered into the Congressional Record by Sen. Chuck Grassley, R-Iowa, was found to be deeply flawed, and Time quickly backpedaled. But the societal panic over the growing accessibility of cyberporn continued.

Thus was born the Communications Decency Act, meant to address what Harvard Law Professor Jonathan Zittrain called a “change in reality.” The law made it illegal to knowingly display or transmit obscene or indecent content online if such content would be accessible to minors.

Challenges in keeping up with the sheer volume of indecent content online

However, some members of Congress felt that government enforcement would not be able to keep up with the sheer volume of indecent content being generated online, rendering private sector participation necessary.

This prompted Reps. Ron Wyden, D-Ore., and Chris Cox, R-Calif., to introduce an amendment to the CDA ensuring that providers of an interactive computer service would not be held liable for third-party content, thus allowing them to moderate with impunity.

Section 230 — unlike what certain politicians have claimed in recent months — held no promise of neutrality. It was simply meant to protect online Good Samaritans trying to screen offensive material from a society with deep concerns about the internet’s potential impact on morality.

“We want to encourage people like Prodigy, like CompuServe, like America Online, like the new Microsoft network, to do everything possible for us, the customer, to help us control, at the portals of our computer, at the front door of our house, what comes in and what our children see,” Cox told his fellow representatives.

“Not even a federal internet censorship army would give our government the power to keep offensive material out of the hands of children who use the new interactive media,” Wyden said. Such a futile effort would “make the Keystone Cops look like crackerjack crime-fighters,” he added, referencing the comically incompetent policemen of early 1900s silent film comedies.

The amendment was met with bipartisan approval on the House floor and passed in a 420–4 vote. The underlying Communications Decency Act was much more controversial. Still, it was signed into law with the Telecommunications Act of 1996.

Although indecency on radio and TV broadcasts has long been subject to regulation by the Federal Communications Commission, the CDA was seen as an assault on the robust world of free speech that was emerging on the global internet.

Passage of the CDA as part of the Telecom Act was met with online outrage.

The following 48 hours saw thousands of websites turn their background color to black in protest as tech companies and activist organizations joined in angry opposition to the new law.

Critics argued not only that the terms “indecent” and “patently offensive” were ambiguous, but also that it was not technologically or economically feasible for online platforms and businesses to screen out minors.

The American Civil Liberties Union filed suit against the law, and other civil liberties organizations and technology industry groups joined in to protest.

“By imposing a censorship scheme unprecedented in any medium, the CDA would threaten what one lower court judge called the ‘never-ending world-wide conversation’ on the Internet,” said Ann Beeson, ACLU national staff attorney, in 1997.

In June 1997, the Supreme Court struck down the anti-indecency provisions of the CDA. But Section 230, legally severed from the rest of the act, survived.

Section I: The Communications Decency Act is Born

Section II: How Section 230 Builds on and Supplements the First Amendment

Section III: What Does the Fairness Doctrine Have to Do With the Internet?

Section IV: As Hate Speech Proliferates Online, Critics Want to See and Control Social Media’s Algorithms

Reporter Em McPhie studied communication design and writing at Washington University in St. Louis, where she was a managing editor for the student newspaper. In addition to agency and freelance marketing experience, she has reported extensively on Section 230, big tech, and rural broadband access. She is a founding board member of Code Open Sesame, an organization that teaches computer programming skills to underprivileged children.

Section 230

Section 230 Interpretation Debate Heats Up Ahead of Landmark Supreme Court Case

Panelists disagreed over the merits of Section 230’s protections and the extent to which they apply.


Screenshot of speakers at the Federalist Society webinar

WASHINGTON, January 25, 2023 — With less than a month to go before the Supreme Court hears a case that could dramatically alter internet platform liability protections, speakers at a Federalist Society webinar on Tuesday were sharply divided over the merits and proper interpretation of Section 230 of the Communications Decency Act.

Gonzalez v. Google, which will go before the Supreme Court on Feb. 21, asks if Section 230 protects Google from liability for hosting terrorist content — and promoting that content via algorithmic recommendations.

If the Supreme Court agrees that “Section 230 does not protect targeted algorithmic recommendations, I don’t see a lot of the current social media platforms and the way they operate surviving,” said Ashkhen Kazaryan, a senior fellow at Stand Together.

Joel Thayer, president of the Digital Progress Institute, argued that the bare text of Section 230(c)(1) does not include any mention of the “immunities” often attributed to the statute, echoing an argument made by several Republican members of Congress.

“All the statute says is that we cannot treat interactive computer service providers or users — in this case, Google’s YouTube — as the publisher or speaker of a third-party post, such as a YouTube video,” Thayer said. “That is all. Warped interpretations from courts… have drastically moved away from the text of the statute to find Section 230(c)(1) as providing broad immunity to civil actions.”

Kazaryan disagreed with this claim, noting that the original co-authors of Section 230 — Sen. Ron Wyden, D-OR, and former Rep. Chris Cox, R-CA — have repeatedly said that Section 230 does provide immunity from civil liability under specific circumstances.

Wyden and Cox reiterated this point in a brief filed Thursday in support of Google, explaining that whether a platform is entitled to immunity under Section 230 relies on two prerequisite conditions. First, the platform must not be “responsible, in whole or in part, for the creation or development of” the content in question, as laid out in Section 230(f)(3). Second, the case must be seeking to treat the platform “as the publisher or speaker” of that content, per Section 230(c)(1).

The statute co-authors argued that Google satisfied these conditions and was therefore entitled to immunity, even if their recommendation algorithms made it easier for users to find and consume terrorist content. “Section 230 protects targeted recommendations to the same extent that it protects other forms of content presentation,” they wrote.

Despite the support of Wyden and Cox, Randolph May, president of the Free State Foundation, predicted that the case was “not going to be a clean victory for Google.” And in addition to the upcoming Supreme Court cases, both Congress and President Joe Biden could potentially attempt to reform or repeal Section 230 in the near future, May added.

May advocated for substantial reforms to Section 230 that would narrow online platforms’ immunity. He also proposed that a new rule rely on a “reasonable duty of care” that would both preserve the interests of online platforms and recognize the harms that fall under their control.

To establish a good replacement for Section 230, policymakers must determine whether there is “a difference between exercising editorial control over content on the one hand, and engaging in conduct relating to the distribution of content on the other hand… and if so, how you would treat those differently in terms of establishing liability,” May said.

No matter the Supreme Court’s decision in Gonzalez v. Google, the discussion is already “shifting the Overton window on how we think about social media platforms,” Kazaryan said. “And we already see proposed regulation legislation on state and federal levels that addresses algorithms in many different ways and forms.”

Texas and Florida have already passed laws that would significantly limit social media platforms’ ability to moderate content, although both have been temporarily blocked pending litigation. Tech companies have asked the Supreme Court to take up the cases, arguing that the laws violate their First Amendment rights by forcing them to host certain speech.


Section 230

Supreme Court Seeks Biden Administration’s Input on Texas and Florida Social Media Laws

The court has not yet agreed to hear the cases, but multiple justices have commented on their importance.


Photo of Solicitor General Elizabeth Prelogar courtesy of the U.S. Department of Justice

WASHINGTON, January 24, 2023 — The Supreme Court on Monday asked for the Joe Biden administration’s input on a pair of state laws that would prevent social media platforms from moderating content based on viewpoint.

The Republican-backed laws in Texas and Florida both stem from allegations that tech companies are censoring conservative speech. The Texas law would restrict platforms with at least 50 million users from removing or demonetizing content based on “viewpoint.” The Florida law places significant restrictions on platforms’ ability to remove any content posted by members of certain groups, including politicians.

Two trade groups — NetChoice and the Computer & Communications Industry Association — jointly challenged both laws, meeting with mixed results in appeals courts. They, alongside many tech companies, argue that the laws would violate platforms’ First Amendment right to decide what speech to host.

Tech companies also warn that the laws would force them to disseminate objectionable and even dangerous content. In an emergency application to block the Texas law from going into effect in May, the trade groups wrote that such content could include “Russia’s propaganda claiming that its invasion of Ukraine is justified, ISIS propaganda claiming that extremism is warranted, neo-Nazi or KKK screeds denying or supporting the Holocaust, and encouraging children to engage in risky or unhealthy behavior like eating disorders.”

The Supreme Court has not yet agreed to hear the cases, but multiple justices have commented on the importance of the issue.

In response to the emergency application in May, Justice Samuel Alito wrote that the case involved “issues of great importance that will plainly merit this Court’s review.” However, he disagreed with the court’s decision to block the law pending review, writing that “whether applicants are likely to succeed under existing law is quite unclear.”

Monday’s request asking Solicitor General Elizabeth Prelogar to weigh in on the cases allows the court to put off the decision for another few months.

“It is crucial that the Supreme Court ultimately resolve this matter: it would be a dangerous precedent to let government insert itself into the decisions private companies make on what material to publish or disseminate online,” CCIA President Matt Schruers said in a statement. “The First Amendment protects both the right to speak and the right not to be compelled to speak, and we should not underestimate the consequences of giving government control over online speech in a democracy.”

The Supreme Court is still scheduled to hear two other major content moderation cases next month, which will decide whether Google and Twitter can be held liable for terrorist content hosted on their respective platforms.


Expert Opinion

Luke Lintz: The Dark Side of Banning TikTok on College Campuses

Campus TikTok bans could have negative consequences for students.


The author of this expert opinion is Luke Lintz, co-owner of HighKey Enterprises LLC

In recent months, there have been growing concerns about the security of data shared on the popular social media app TikTok. As a result, a number of colleges and universities have decided to ban the app from their campuses.

While these bans may have been implemented with the intention of protecting students’ data, they could also have a number of negative consequences.

Banning TikTok on college campuses could have a negative impact on the interconnectedness of the student body. Many students use the app to connect with others who share their interests or come from similar backgrounds. For example, international students may use the app to connect with other students from their home countries, or students from underrepresented groups may use the app to connect with others who share similar experiences.

By denying them access to TikTok, colleges may be inadvertently limiting their students’ ability to form diverse and supportive communities. This can have a detrimental effect on the student experience, as students may feel isolated and disconnected from their peers. Additionally, it can also have a negative impact on the wider college community, as the ban may make it more difficult for students from different backgrounds to come together and collaborate.

Furthermore, by banning TikTok, colleges may also be missing out on the opportunity to promote diverse events on their campuses. The app is often used by students to share information about events, clubs and other activities that promote diversity and inclusivity. Without this platform, it may be more difficult for students to learn about these initiatives and for organizations to reach a wide audience.

Lastly, it’s important to note that banning TikTok on college campuses could also have a negative impact on the ability of college administrators to communicate with students. Many colleges and universities have started to use TikTok as a way to connect with students and share important information and updates. The popularity of TikTok makes it the perfect app for students to use to reach large, campus-wide audiences.

TikTok also offers a unique way for college administrators to connect with students in a more informal and engaging way. TikTok allows administrators to create videos that are fun, creative and relatable, which can help to build trust and to heighten interaction with students. Without this platform, it may be more difficult for administrators to establish this type of connection with students.

Banning TikTok from college campuses could have a number of negative consequences for students, including limiting their ability to form diverse and supportive communities, to take advantage of future opportunities, and to stay informed about what’s happening on campus. College administrators should consider these potential consequences before making a decision about banning TikTok from their campuses.

Luke Lintz is a successful businessman, entrepreneur and social media personality. Today, he is the co-owner of HighKey Enterprises LLC, which aims to revolutionize social media marketing. HighKey Enterprises is a highly rated company that has molded its global reputation by servicing high-profile clients that range from A-listers in the entertainment industry to the most successful one percent across the globe. This piece is exclusive to Broadband Breakfast.

Broadband Breakfast accepts commentary from informed observers of the broadband scene. Please send pieces to commentary@breakfast.media. The views reflected in Expert Opinion pieces do not necessarily reflect the views of Broadband Breakfast and Breakfast Media LLC.

