A Short History of Online Free Speech, Part I: The Communications Decency Act Is Born

Photo of Chuck Grassley in April 2011 by Gage Skidmore used with permission

WASHINGTON, August 19, 2019 — Despite all the Sturm und Drang surrounding Section 230 of the Communications Decency Act today, the measure was largely ignored when first passed into law 23 years ago. A great deal of today’s discussion ignores the statute’s unique history and purposes as part of the short-lived CDA.

In this four-part series, Broadband Breakfast reviews the past with an eye toward current controversies and the future of online free speech.

This article looks at content moderation on early online services, and how that fueled concern about indecency in general. On Tuesday, we’ll look at how Section 230 is similar to and different from America’s First Amendment legacy.

On Wednesday, in Part III, Broadband Breakfast revisits the reality and continuing mythology surrounding the “Fairness Doctrine.” Does it or has it ever applied online? And finally, on Thursday, we’ll envision what the future holds for the legal treatment of “hate speech.”

While most early chat boards did not moderate, Prodigy did — to its peril

The early days of the internet were dominated by online service providers such as America Online, Delphi, CompuServe and Prodigy. CompuServe did not engage in any form of content moderation, whereas Prodigy positioned itself as a family-friendly alternative by enforcing content guidelines and screening offensive language.

It didn’t take long for both platforms to be sued for defamation. In the 1991 case Cubby v. CompuServe, the federal district court in New York ruled that CompuServe could not be held liable for third party content of which it had no knowledge, similar to a newsstand or library.

But in 1995, the New York Supreme Court (despite its name, a trial-level court) ruled in Stratton Oakmont v. Prodigy that the latter platform had taken on liability for all posts simply by attempting to moderate some, because doing so constituted editorial control.

“That such control is not complete…does not minimize or eviscerate the simple fact that Prodigy has uniquely arrogated to itself the role of determining what is proper for its members to post and read on its bulletin boards,” the court wrote.

Prodigy had more than two million subscribers, who collectively generated 60,000 new postings per day, far more than the platform could review individually. The decision left platforms with a stark choice: review every single post or forgo content moderation altogether.

Many early supporters of the internet criticized the ruling from a business perspective, warning that penalizing online platforms for attempting to moderate content would push them toward not moderating at all. The resulting platforms would be less usable and, by extension, less successful.

The mid-1990s seemed to bring a cultural crisis of online indecency

But an emerging cultural crisis also drove criticism of the Stratton Oakmont court’s decision. As a vast array of content suddenly became available to anyone with computer access, parents and lawmakers panicked over the new accessibility of indecent and pornographic material, especially to minors.

A Time Magazine cover from just two months after the decision depicted a child with bulging eyes and dropped jaw, illuminated by the ghastly light of a computer screen. Underneath a bold title reading “cyberporn” in all caps, an ominous headline declared the problem to be “pervasive and wild.”

And then it posed the question that was weighing heavily on certain members of Congress: “Can we protect our kids — and free speech?”

The foreboding study behind the cover story, which was entered into the Congressional Record by Sen. Chuck Grassley, R-Iowa, was found to be deeply flawed, and Time quickly backpedaled. But the societal panic over the growing accessibility of cyberporn continued.

Thus was born the Communications Decency Act, meant to address what Harvard Law Professor Jonathan Zittrain called a “change in reality.” The law made it illegal to knowingly display or transmit obscene or indecent content online if such content would be accessible to minors.

Challenges in keeping up with the sheer volume of indecent content online

However, some members of Congress felt that government enforcement would not be able to keep up with the sheer volume of indecent content being generated online, rendering private sector participation necessary.

This prompted Reps. Ron Wyden, D-Ore., and Chris Cox, R-Calif., to introduce an amendment to the CDA ensuring that providers of an interactive computer service would not be held liable for third-party content, thus allowing them to moderate with impunity.

Section 230 — unlike what certain politicians have claimed in recent months — held no promise of neutrality. It was simply meant to protect online Good Samaritans trying to screen offensive material from a society with deep concerns about the internet’s potential impact on morality.

“We want to encourage people like Prodigy, like CompuServe, like America Online, like the new Microsoft network, to do everything possible for us, the customer, to help us control, at the portals of our computer, at the front door of our house, what comes in and what our children see,” Cox told his fellow representatives.

“Not even a federal internet censorship army would give our government the power to keep offensive material out of the hands of children who use the new interactive media,” Wyden said. Such a futile effort would “make the Keystone Cops look like crackerjack crime-fighters,” he added, referencing the comedically incompetent policemen of early 1900s silent film comedies.

The amendment was met with bipartisan approval on the House floor and passed in a 420–4 vote. The underlying Communications Decency Act was much more controversial. Still, it was signed into law with the Telecommunications Act of 1996.

Although indecency on radio and TV broadcasts has long been subject to regulation by the Federal Communications Commission, the CDA was seen as an assault on the robust world of free speech that was emerging on the global internet.

Passage of the CDA as part of the Telecom Act was met with online outrage.

The following 48 hours saw thousands of websites turn their background color to black in protest as tech companies and activist organizations joined in angry opposition to the new law.

Critics argued that not only were the terms “indecent” and “patently offensive” ambiguous, but it was not technologically or economically feasible for online platforms and businesses to screen out minors.

The American Civil Liberties Union filed suit against the law, and other civil liberties organizations and technology industry groups joined in to protest.

“By imposing a censorship scheme unprecedented in any medium, the CDA would threaten what one lower court judge called the ‘never-ending world-wide conversation’ on the Internet,” said Ann Beeson, ACLU national staff attorney, in 1997.

In June 1997, the Supreme Court struck down the anti-indecency provisions of the CDA in Reno v. ACLU. But legally severed from the rest of the act, Section 230 survived.

Part I: The Communications Decency Act Is Born

Part II: How Section 230 Builds on and Supplements the First Amendment

Part III: What Does the Fairness Doctrine Have to Do With the Internet?

Part IV: As Hate Speech Proliferates Online, Critics Want to See and Control Social Media’s Algorithms

Development Associate Emily McPhie studied communication design and writing at Washington University in St. Louis, where she was a managing editor for campus publication Student Life. She is a founding board member of Code Open Sesame, an organization that teaches computer skills to underprivileged children in six cities across Southern California.

Washington’s Antitrust Push Could Create ‘Chilling Effect’ on Startups, Observers Say

There is concern that an FTC focused on ‘big is bad’ will stunt economic growth in the future.

FTC Chairwoman Lina Khan

WASHINGTON, September 23, 2021 – Advocates for less government encroachment on big technology companies are warning that antitrust is being weaponized for political ends that may end up having a “chilling effect” on innovative businesses.

The Institute for Policy Innovation held a web event Wednesday to discuss antitrust and the modern economy. Panelists voiced concern that antitrust law may be wedded to political aims, ultimately creating a precedent whereby the federal government stifles innovators who grow too big.

Jessica Melugin, the director of the Center for Technology and Innovation, said technology companies could see what’s happening in Washington – with lots of talk of breaking up companies deemed too big – and grow uncertain about the future.

She noted that growing companies generally pursue one of two paths to make it big: filing an initial public offering, in which the company’s shares become publicly traded, or being bought out by a larger company. She said talk emanating from the White House and Washington generally about regulating the industry could deter larger companies from making acquisitions, and onerous financial regulations could put a damper on IPO dreams.

“If you start robbing companies of other smaller companies they purchased, it’s going to give a lot of entrepreneurs and a lot of funders in Silicon Valley pause,” Melugin said. “If another path to success gets blocked – the IPO is now harder, and now acquisitions are a little bit questionable…that’s a chilling effect.”

President Joe Biden has made a number of appointments to key positions that are bringing more attention to Big Tech, including known Amazon critic Lina Khan to chair the Federal Trade Commission, which recently filed an amended case against Facebook for alleged anticompetitive practices. He also appointed antitrust expert and Google critic Jonathan Kanter as assistant attorney general for the Justice Department’s antitrust division.

FTC could set a bad precedent if focus is ‘big is bad’ 

Christopher Koopman, the executive director at the Center for Growth and Opportunity at Utah State University, said he’s concerned about the precedent Khan could set for big companies.

He said the odds are that once Khan starts, she will continue down “this path of ‘big is bad’ because that’s a prior that she has and she’s continued to operate on her entire professional career. It just so happens that the focus of this is on tech companies.

“We may be building a regulatory apparatus that will continue to burrow a hole right down the middle of the American economy before we even have a chance to ask if that’s really what we want,” Koopman added. “We just have to recognize that it doesn’t matter, really, who is running the FTC – once we tell the FTC to go break up big companies, they’re going to go break up big companies.”

And the concern for Carl Szabo, vice president and general counsel of lobby group NetChoice, which advocates for less government regulation on the future of technology, is not just a domestic problem, but an international one, too.

“I really do worry about us shanking our innovation and essentially giving a free kick to our competitors and that seems to be what we’re doing,” Szabo said. “Right now, we lead the world.

“This is an international issue, this is a national issue, and we really need to – whether Conservative or Democrat – as Americans we need to see the forest from the trees. And if we want to put corporations ahead of competitors and think those are good democratic values, go ahead and do it.

The House has before it six antitrust bills targeting big technology companies, which passed the chamber’s judiciary committee in June. The bills aim to rein in the power of Big Tech through new antitrust liability provisions, including new merger and acquisition review, measures to prevent anticompetitive activity, and new powers for government enforcers to break up or separate big businesses.

Federal Communications Commissioner Brendan Carr said earlier this year that Big Tech has too much influence and power, citing the ability of Apple and Google to remove applications like controversial chat website Parler from their app stores. Carr recently recommended that Big Tech contribute to the Universal Service Fund, which supports broadband expansion in low-income and rural areas of the country, because these companies benefit from broadband.

Tread Carefully on Tech Platform Data Portability, Conference Hears

Politico panel debates merit of allowing tech platform users to migrate data freely.

Public Knowledge's Charlotte Slaiman before a Senate committee on September 21, 2021.

WASHINGTON, September 23, 2021 – Panelists debated Monday the merits of forcing companies to allow users to migrate their data from one platform to another, with some lauding the proposal and others cautioning Congress not to stifle innovators by taking a blanket approach.

The Politico Tech Summit hosted a panel discussing legislation before the House – H.R. 3849 – that would force companies to allow users to move their data from one platform to another. The idea behind data portability is to spur competition by lowering the barrier for users to try other services, which they might otherwise avoid because they cannot take their contacts, connections, and photos with them to the new platform.

Experts say such a portability mandate would be welcomed by younger internet platforms competing to grow their networks, but resisted by larger firms like Facebook and TikTok, which would argue that they grew their networks organically and exert no anticompetitive pressure by keeping them private.

“[Anti-trust legislation] is really about opening up markets for innovative competitors to enter,” said Charlotte Slaiman, competition policy director for public interest group Public Knowledge.

“Network effects are very powerful in many of these dominant digital platforms. Network effects means it’s very difficult for a person to leave a network. Even if you’re upset with Facebook, you don’t want to leave because of your one thousand connections or whatever.

“If you think about it from the perspective of an entrepreneur, they’re facing this problem times a million users,” Slaiman added. “The sources of funding know it, the venture capitalists know…interoperability is about addressing those network effects.” Interoperability is the extent to which a platform’s infrastructure works with others, which can facilitate data portability.

And more competition is emerging in the online platform space. For example, sites like Parler and Vero have emerged as social networking alternatives to the likes of Facebook, while video sites like Rumble and Locals have emerged as alternatives to YouTube.

Slaiman argues that platforms should compete on the features and user experiences they offer, not on owning a pool of users and profiles.

Slaiman testified similarly before the Senate Judiciary Committee’s Subcommittee on Competition Policy, Antitrust, and Consumer Rights on Tuesday.

Caution for portability legislation

Zach Graves, head of public policy for the think tank Lincoln Network, said there are a lot of cases where mandated portability “makes a lot of sense.

“If you look at the telecom context, you know the fact that you can take your phone number and port it to a different carrier. But we should approach this with caution. There are tradeoffs… I think there’s sort of a category error in how they’re constructing this that big is bad and that’s how we should regulate it,” he said.

“I would prefer a more sector-specific approach,” Graves added. “If we’re talking about online retail, we should regulate online retail. If we’re talking about online ads, we should regulate online ads. The fact that we’re saying these companies are big and we should scrutinize them and give them a special framework I don’t agree with.”

Steve DelBianco, CEO of lobby group NetChoice, which pushes for a tech future free from onerous government regulation, was more blunt.

“The interoperability mandate will be a disaster for competition, for privacy and for data security,” he said. “There’s a complete difference between phone number portability and data portability compared to having interoperability where you open a hole into your application which means that any competitor can see data that violates your own privacy requirements. [That creates] security problems.

“People can join multiple social networks at the same time. The theory of network effects really falls down on this.”

Repealing Section 230 Would be Harmful to the Internet As We Know It, Experts Agree

While some advocate for a tightening of language, other experts believe Section 230 should not be touched.

Published

on

Rep. Ken Buck, R-Colo., speaking on the floor of the House

WASHINGTON, September 17, 2021 — Rep. Ken Buck, R-Colo., advocated for legislators to “tighten up” the language of Section 230 while preserving the “spirit of the internet” and enhancing competition.

There is common ground in supporting efforts to minimize speech advocating imminent harm, Buck said, even though Republican and Democratic critics tend to approach the issue of changing Section 230 from vastly different directions.

“Nobody wants a terrorist organization recruiting on the internet or an organization that is calling for violent actions to have access to Facebook,” Buck said. He followed up that statement, however, by stating that the most effective way to combat “bad speech is with good speech” and not by censoring “what one person considers bad speech.”

Antitrust not necessarily the best means to improve competition policy

For companies that are not technically in violation of antitrust policies, improving competition through other means would have to be the answer, said Buck. He pointed to Parler as a social media platform that is an appropriate alternative to Twitter.

Though some Twitter users did flock to Parler, particularly during and around the 2020 election, the newer social media company has a reputation for allowing objectionable content that would otherwise be unable to thrive on social media.

Buck also set himself apart from some of his fellow Republicans—including Donald Trump—by clarifying that he does not want to repeal Section 230.

“I think that repealing Section 230 is a mistake,” he said. “If you repeal Section 230, there will be a slew of lawsuits.” Buck explained that without the protections afforded by Section 230, big companies will likely find a way to sufficiently address those lawsuits, and the only entities harmed will be the alternative platforms that were meant to serve as competition.

More content moderation needed

Daphne Keller of the Stanford Cyber Policy Center argued that it is in the best interest of social media platforms to enact various forms of content moderation, and address speech that may be legal but objectionable.

“If platforms just hosted everything that users wanted to say online, or even everything that’s legal to say—everything that the First Amendment permits—you would get this sort of cesspool or mosh pit of online speech that most people don’t actually want to see,” she said. “Users would run away and advertisers would run away and we wouldn’t have functioning platforms for civic discourse.”

Even companies like Parler and Gab—which pride themselves on being unyielding bastions of free speech—have begun to engage in content moderation.

“There’s not really a left right divide on whether that’s a good idea, because nobody actually wants nothing but porn and bullying and pro-anorexia content and other dangerous or garbage content all the time on the internet.”

She explained that this is a double-edged sword, because while consumers seem to value some level of moderation, companies moderating their platforms have a huge amount of influence over what their consumers see and say.

What problems do critics of Section 230 want addressed?

Internet Association President and CEO Dane Snowden stated that most of the problems surrounding the Section 230 discussion boil down to a fundamental disagreement over the problems that legislators are trying to solve.

Changing the language of Section 230 would impact not just the tech industry: “[Section 230] impacts ISPs, libraries, and universities,” he said. “Things like self-publishing, crowdsourcing, Wikipedia, how-to videos — all those things are impacted by any kind of significant neutering of Section 230.”

Section 230 was created to give users the ability and security to create content online without fear of legal reprisals, he said.

Another significant supporter of the status quo was Chamber of Progress CEO Adam Kovacevich.

“I don’t think Section 230 needs to be fixed. I think it needs [a better] publicist,” Kovacevich said, arguing that policymakers need to gain a better appreciation for Section 230. “If you took away 230, you would give companies two bad options: either turn into Disneyland or turn into a wasteland.”

“Either turn into a very highly curated experience where only certain people have the ability to post content, or turn into a wasteland where essentially anything goes because a company fears legal liability,” Kovacevich said.
