
A Short History of Online Free Speech, Part I: The Communications Decency Act Is Born


Photo of Chuck Grassley in April 2011 by Gage Skidmore used with permission

WASHINGTON, August 19, 2019 — Despite all the Sturm und Drang surrounding Section 230 of the Communications Decency Act today, the measure was largely ignored when first passed into law 23 years ago. A great deal of today’s discussion ignores the statute’s unique history and purposes as part of the short-lived CDA.

In this four-part series, Broadband Breakfast reviews the past with an eye toward current controversies and the future of online free speech.

This article looks at content moderation on early online services, and how that fueled concern about indecency in general. On Tuesday, we’ll look at how Section 230 is similar to and different from America’s First Amendment legacy.

On Wednesday, in Part III, Broadband Breakfast revisits the reality and continuing mythology surrounding the “Fairness Doctrine.” Does it or has it ever applied online? And finally, on Thursday, we’ll envision what the future holds for the legal treatment of “hate speech.”

While most early chat boards did not moderate, Prodigy did — to its peril

The early days of the internet were dominated by online service providers such as America Online, Delphi, CompuServe and Prodigy. CompuServe did not engage in any form of content moderation, whereas Prodigy positioned itself as a family-friendly alternative by enforcing content guidelines and screening offensive language.

It didn’t take long for both platforms to be sued for defamation. In the 1991 case Cubby v. CompuServe, the federal district court in New York ruled that CompuServe could not be held liable for third-party content of which it had no knowledge, similar to a newsstand or library.

But in 1995, the New York Supreme Court (a trial court, despite its name) ruled in Stratton Oakmont v. Prodigy that the latter platform had taken on liability for all posts simply by attempting to moderate some, since doing so constituted editorial control.

“That such control is not complete…does not minimize or eviscerate the simple fact that Prodigy has uniquely arrogated to itself the role of determining what is proper for its members to post and read on its bulletin boards,” the court wrote.

Prodigy had more than two million subscribers, who collectively generated 60,000 new postings per day, far more than the platform could review on an individual basis. The decision left platforms with a stark choice: review every post individually or forgo content moderation altogether.

Many early supporters of the internet criticized the ruling from a business perspective, warning that penalizing online platforms for attempting to moderate content would incentivize the option of not moderating at all. The resulting platforms would be less usable and, by extension, less successful.

The mid-1990s seemed to bring a cultural crisis of online indecency

But an emerging cultural crisis also drove criticism of the Stratton Oakmont court’s decision. As a vast range of content suddenly became available to anyone with computer access, parents and lawmakers panicked over the new accessibility of indecent and pornographic material, especially to minors.

A Time Magazine cover from just two months after the decision depicted a child with bulging eyes and dropped jaw, illuminated by the ghastly light of a computer screen. Underneath a bold title reading “cyberporn” in all caps, an ominous headline declared the problem to be “pervasive and wild.”

And then it posed the question that was weighing heavily on certain members of Congress: “Can we protect our kids — and free speech?”

The foreboding study behind the cover story, which was entered into the Congressional Record by Sen. Chuck Grassley, R-Iowa, was found to be deeply flawed, and Time quickly backpedaled. But the societal panic over the growing accessibility of cyberporn continued.

Thus was born the Communications Decency Act, meant to address what Harvard Law Professor Jonathan Zittrain called a “change in reality.” The law made it illegal to knowingly display or transmit obscene or indecent content online if such content would be accessible by minors.

Challenges in keeping up with the sheer volume of indecent content online

However, some members of Congress felt that government enforcement would not be able to keep up with the sheer volume of indecent content being generated online, rendering private sector participation necessary.

This prompted Reps. Ron Wyden, D-Ore., and Chris Cox, R-Calif., to introduce an amendment to the CDA ensuring that providers of an interactive computer service would not be held liable for third-party content, thus allowing them to moderate with impunity.

Section 230 — unlike what certain politicians have claimed in recent months — held no promise of neutrality. It was simply meant to protect online Good Samaritans trying to screen offensive material from a society with deep concerns about the internet’s potential impact on morality.

“We want to encourage people like Prodigy, like CompuServe, like America Online, like the new Microsoft network, to do everything possible for us, the customer, to help us control, at the portals of our computer, at the front door of our house, what comes in and what our children see,” Cox told his fellow representatives.

“Not even a federal internet censorship army would give our government the power to keep offensive material out of the hands of children who use the new interactive media,” Wyden said. Such a futile effort would “make the Keystone Cops look like crackerjack crime-fighters,” he added, referencing the comedically incompetent police of early 1900s silent film comedies.

The amendment was met with bipartisan approval on the House floor and passed in a 420–4 vote. The underlying Communications Decency Act was much more controversial. Still, it was signed into law with the Telecommunications Act of 1996.

Although indecency on radio and TV broadcasts has long been subject to regulation by the Federal Communications Commission, the CDA was seen as an assault on the robust world of free speech that was emerging on the global internet.

Passage of the CDA as part of the Telecom Act was met with online outrage.

The following 48 hours saw thousands of websites turn their background color to black in protest as tech companies and activist organizations joined in angry opposition to the new law.

Critics argued that not only were the terms “indecent” and “patently offensive” ambiguous, but it was also not technologically or economically feasible for online platforms and businesses to screen out minors.

The American Civil Liberties Union filed suit against the law, and other civil liberties organizations and technology industry groups joined in to protest.

“By imposing a censorship scheme unprecedented in any medium, the CDA would threaten what one lower court judge called the ‘never-ending world-wide conversation’ on the Internet,” said Ann Beeson, ACLU national staff attorney, in 1997.

By June of 1997, the Supreme Court had struck down the anti-indecency provisions of the CDA. But legally severed from the rest of the act, Section 230 survived.

Part I: The Communications Decency Act Is Born

Part II: How Section 230 Builds on and Supplements the First Amendment

Part III: What Does the Fairness Doctrine Have to Do With the Internet?

Part IV: As Hate Speech Proliferates Online, Critics Want to See and Control Social Media’s Algorithms

Americans Should Look to Filtration Software to Block Harmful Content from View, Event Hears

One professor said it is the only way to solve the harmful content problem without encroaching on free speech rights.


Photo of Adam Neufeld of Anti-Defamation League, Steve Delbianco of NetChoice, Barak Richman of Duke University, Shannon McGregor of University of North Carolina (left to right)

WASHINGTON, July 21, 2022 – Researchers at an Internet Governance Forum event Thursday recommended the use of third-party software that filters out harmful content on the internet, in an effort to combat what they say are social media algorithms that feed users content they don’t want to see.

Users of social media sites often don’t know what algorithms are filtering the information they consume, said Steve DelBianco, CEO of NetChoice, a trade association that represents the technology industry. Most algorithms function to maximize user engagement by manipulating users’ emotions, which is particularly worrisome, he said.

But third-party software, such as Sightengine and Amazon’s Rekognition – which moderate what users see by screening out images and videos the user flags as objectionable – could act in place of other solutions to tackle disinformation and hate speech, said Barak Richman, professor of law and business at Duke University.

Richman argued that this “middleware technology” is the only way to solve this universal problem without encroaching on free speech rights. He suggested that Americans invest in these technologies – which would be supported by popular platforms including Facebook, Google and TikTok – to create a buffer between harmful algorithms and the user.

Such technologies already exist in limited applications that offer less personalization and accuracy in filtering, said Richman. But market demand needs to increase to support innovation and expansion in this area.

Americans across party lines believe that there is a problem with disinformation and hate speech, but disagree on the solution, added fellow panelist Shannon McGregor, senior researcher at the Center for Information, Technology, and Public Life at the University of North Carolina.

The conversation comes as debate continues regarding Section 230, a provision in the Communications Decency Act that protects technology platforms from being liable for content their users post. Some say Section 230 only protects “neutral platforms,” while others claim it allows powerful companies to ignore user harm. Experts in the space disagree on the responsibility of tech companies to moderate content on their platforms.


Surveillance Capitalism a Symptom of Web-Dependent Companies, Not Ownership

Former Google executive Richard Whitt critiqued Ben Tarnoff’s argument in ‘Internet for the People’ during Gigabit Libraries discussion.


Photo of Ben Tarnoff, co-founder of magazine Logic and the author of “Internet for the People”

July 15, 2022 – A former Google executive pushed back against the claim that the privatization of broadband infrastructure created the world’s current data and privacy concerns, suggesting instead that the companies that rely on the web have helped fuel the problem.

Richard Whitt, president of technology non-profit GLIA Foundation and former employee of Google, argued that while the World Wide Web is rife with problems, the internet infrastructure underlying the web remains fundamentally sound.

Whitt was responding to claims made by Ben Tarnoff, a journalist and founder of Logic Magazine, at the Libraries in Response event on July 8. Tarnoff argued – as he does in his recent book, “Internet for the People” – that the privatization of broadband infrastructure in the 1990s has allowed the use and commodification of personal data for profit to flourish (known as surveillance capitalism).

The discussion took place during the Gigabit Libraries Network’s series “Libraries in Response.” The session was titled “If the Internet is Broken, How Can Libraries Help Fix it?”

Privatization, Tarnoff claims, has raised such issues as polarization of ideologies and the “annihilation of our privacy.” As a result, he said, the American people are losing trust in tech companies that “rule the internet.”

Whitt responded that the internet is working well thanks to its protocols, the standardized rules for routing and addressing packets of data across networks that were established at the internet’s inception.

The World Wide Web, a system built on top of the internet that allows communication through easy-to-understand graphical user interfaces, enabled browsers and other applications to emerge. Those applications have since made surveillance capitalism the governing approach of the web, said Whitt, suggesting that it is not ownership of the hard infrastructure that is the problem.

The advertising market that encourages surveillance extraction, analysis and manipulation is, and will continue to be, profitable, Whitt continued.

The discussion follows a Pew Research Center study that found that only half of Americans believed tech companies had a positive effect in 2019, down from 71 percent in 2015.


American Innovation and Choice Online Act Has Panelists Divided on Small Business Impact

The bill is intended to prohibit product preferences on tech platforms, with some saying it could harm small companies dependent on those platforms.


Panel at CSIS event on Thursday

WASHINGTON, July 6, 2022 – Observers are still divided about the effect on small business of legislation that is intended to keep large technology platforms from giving preference to their own products over others.

The Center for Strategic and International Studies hosted experts last month to discuss the American Innovation and Choice Online Act, which was introduced in January. The event heard support for the bill as well as concern that it could negatively impact smaller businesses that rely on the larger platforms.

“Existing antitrust law is not going to be enough to rein in the power of the largest tech platforms,” Charlotte Slaiman, competition policy director at public interest group Public Knowledge, said, adding that the AICOA is very important for small business competition “to get a fair shot.”

“Fundamentally this is a really important…for competition because this protects small companies that are potential competitors against one of these large platforms,” she added.

Krisztian Katona, vice president of global competition and regulatory policy at the Computer & Communications Industry Association, however, said that after performing a cost-benefit analysis of AICOA, he expects the legislation will hurt business competition.

He said that the legislation would increase operating costs for smaller companies and force them to reduce the cost of their services. He predicted that close to 100 companies would be negatively impacted by 2030 if the legislation becomes law.

Others agree with Katona. A report in March by the Small Business and Entrepreneurship Council said small business owners felt the AICOA could be detrimental to them, saying it could increase prices. Meanwhile Michael Petricone, senior vice president of the Consumer Technology Association, said in June that small businesses would be affected the most by big tech regulation because they depend on those platforms.
