Free Speech

Part III: The GOP Wants to Kill the Fairness Doctrine, Then Applies It to the Internet

Photo of Ted Cruz from February 2018 by Gage Skidmore used with permission

WASHINGTON, August 21, 2019 — Questions of political neutrality and social media bias have been at the forefront of the ongoing debate over the Communications Decency Act’s Section 230. Proposals to police that neutrality are frequently compared to another controversial policy: the Fairness Doctrine.

“The idea that government should police the ‘neutrality’ of websites is, in effect, a Fairness Doctrine for the Internet,” said TechFreedom President Berin Szóka.

The Republican Party has a long history of opposition to the Fairness Doctrine. Indeed, it was under President Ronald Reagan’s Federal Communications Commission that the doctrine was repealed in 1987.

Republican opposition runs so deep, in fact, that the official GOP platform still calls for “an end to the so-called Fairness Doctrine.”

Given the intensity of Republican opposition to the doctrine for generations, any similar proposal coming from the right would be ironic.

And yet, in a startling break from party history, recent months have seen several prominent Republican politicians do just that, claiming that legal protections for online platforms should be conditioned on their political neutrality.

What was the Fairness Doctrine?

In the ongoing debate over Section 230, it is important to take into account the effects of similar measures throughout history.

First implemented by the FCC in 1949, the Fairness Doctrine required broadcast licensees to “adequately cover issues of public importance” and include coverage of all the “various positions taken by responsible groups.”

The doctrine was upheld by the Supreme Court two decades later in Red Lion Broadcasting Co. v. FCC, a decision based upon the premise that public airwaves were limited, and therefore scarce.

“A license permits broadcasting, but the licensee has no constitutional right to…monopolize a radio frequency to the exclusion of his fellow citizens,” Justice Byron White wrote. Without government regulation, he said, “the medium would be of little use because of the cacophony of competing voices, none of which could be clearly and predictably heard.”

By contrast, in the 1974 case Miami Herald Publishing Co. v. Tornillo, the Court drew a clear line between broadcast regulation and the First Amendment rights of print publishers. It unanimously ruled that a “government-enforced right of access inescapably dampens the vigor and limits the variety of public debate.”

The FCC was also turning against the doctrine, releasing a 1985 report that identified several of its weaknesses. Despite the doctrine’s original purpose of encouraging diverse viewpoints, the agency wrote, “we fear that in operation it may have the paradoxical effect of actually inhibiting the expression of a wide spectrum of opinion on controversial issues of public importance.”

The requirement “inextricably involves the Commission in the dangerous task of evaluating the merits of particular viewpoints,” the report continued.

The Reagan administration put the Fairness Doctrine to sleep

The conservative-libertarian alliance was staunchly opposed to the Fairness Doctrine and to anything that smacked of bringing it back. Its members voiced concerns that broadcasters would shy away from addressing any issue that might be considered controversial, for fear of saying something that would trigger the doctrine’s right of reply.

After a series of decisions and court challenges over the doctrine’s application to teletext, a proto-internet technology for transmitting text over broadcast signals, a Republican-majority FCC under Chairman Dennis Patrick officially abolished the doctrine. Congress passed legislation in an attempt to reinstate the Fairness Doctrine, but Reagan vetoed it and the effort died.

“We must not ignore the obvious intent of the First Amendment, which is to promote vigorous public debate and a diversity of viewpoints in the public forum as a whole, not in any particular medium, let alone in any particular journalistic outlet,” Reagan said.

“History has shown that the dangers of an overly timid or biased press cannot be averted through bureaucratic regulation, but only through the freedom and competition that the First Amendment sought to guarantee,” he continued.

Indeed, conservative talk radio might not have arisen in the 1990s without the death of the Fairness Doctrine, and its growth was fueled by considerable fear-mongering about the doctrine’s potential return, including another unsuccessful congressional attempt to reinstate it in 1991.

Why is the 2016 GOP platform still calling for an end to the Fairness Doctrine?

The official 2016 Republican Party platform calls for “an end to the so-called Fairness Doctrine.” In its place, the platform advocates “free-market approaches to free speech unregulated by government” and supports the “repeal of federal restrictions…protecting political speech on the internet.”

But with seeming disregard for this position, multiple GOP senators have recently supported legislation that appears markedly similar to the doctrine.

In June, Sen. Josh Hawley, R-Mo., introduced a bill that would require major digital platforms to prove every two years to the Federal Trade Commission that their moderation practices were entirely neutral in order to receive Section 230 protections.

At a Senate Judiciary Subcommittee Hearing in July, Sen. Ted Cruz, R-Texas, claimed that if big tech could not provide “clear, compelling data and evidence” of their neutrality, “there’s no reason on earth why Congress should give them a special subsidy through Section 230.”

Applying a Fairness Doctrine to the internet would have severe consequences, said Szóka, in that platforms would likely respond to such a rule by simply “squelching all political discussion.”

“The fact that the current occupant of the White House has regularly threatened to use the courts against his critics, and in fact has used the courts to enforce non-disclosure agreements to silence those he does not want to speak, should give great pause to anyone considering empowering the government to force website operators to satisfy a standard so vague as ‘neutrality’ regarding ‘controversial’ matters (a category they cannot define in advance),” Szóka warned.

With the Fairness Doctrine dead, the First Amendment now covers almost all mediums of transmission

The Fairness Doctrine never would have survived First Amendment scrutiny were it not for the still-not-overturned holding in Red Lion. But the case is less and less relevant. Reno v. ACLU, the 1997 Supreme Court decision striking down the anti-indecency provisions of the Communications Decency Act, made clear that the internet was not subject to the restrictive view of free speech that governed broadcast media.

That precedent on free speech has been reaffirmed repeatedly by the Supreme Court, including in Brown v. EMA.

In that 2011 case, the Supreme Court noted that “whatever the challenges of applying the Constitution to ever-advancing technology, ‘the basic principles of freedom of speech and the press, like the First Amendment’s command, do not vary’ when a new and different medium for communication appears.”

Moreover, whether or not online platforms are politically neutral, courts have made it clear that the government cannot require speakers to give up First Amendment rights in exchange for a benefit, such as Section 230 protections.

In Perry v. Sindermann, the Supreme Court declared that “even though a person has no ‘right’ to a valuable government benefit, and even though the government may deny him the benefit for any number of reasons, there are some reasons upon which the government may not rely.”

These reasons include denying a person benefits “on a basis that infringes his constitutionally protected interest, especially his interest in freedom of speech.”

Section I: The Communications Decency Act is Born

Section II: How Section 230 Builds on and Supplements the First Amendment

Section III: What Does the Fairness Doctrine Have to Do With the Internet?

Section IV: As Hate Speech Proliferates Online, Critics Want to See and Control Social Media’s Algorithms

Development Associate Emily McPhie studied communication design and writing at Washington University in St. Louis, where she was a managing editor for campus publication Student Life. She is a founding board member of Code Open Sesame, an organization that teaches computer skills to underprivileged children in six cities across Southern California.

Free Speech

Former GOP Congressman and UK MP Highlight Dangers of Disinformation and Urge Regulation

Will Hurd and Member of Parliament Damien Collins say disinformation on social media platforms is a worry in midterm elections.

Photo of Will Hurd from March 2016 by Paul Morigi used with permission

WASHINGTON, January 11, 2022 – Former Republican Rep. Will Hurd said that disinformation campaigns could have a very concerning effect on the upcoming midterm elections.

He and United Kingdom Member of Parliament Damien Collins urged new measures to hold tech and social media companies accountable for disinformation.

Hurd particularly expressed concern about how disinformation sows doubts about the legitimacy of elections and about effective treatments for COVID-19. The consequences of being misinformed on these topics are quite significant, he and Collins said Tuesday during a webinar hosted by the Washington Post.

Hurd, a Texan, said that the 2020 American election was the most secure the nation has ever had, and yet disinformation around it led to the insurrection at the Capitol.

Collins agreed that democratic elections are particularly at risk, in part because of the ever-present disinformation around COVID and its effects on public health and politics. “A lack of regulation online has left too many people vulnerable to abuse, fraud, violence, and in some cases even loss of life,” he said.

When it comes to oversight of tech and media companies, Collins said, citizens currently rely on whistleblowers, investigative journalists, and self-serving reports from companies that manipulate their own data.

Unless government gets involved, they said, the nation will remain ignorant of the spread of disinformation.

Tech companies need to increase their transparency, even though that is something they are struggling to do.

Big tech companies are constantly conducting research and surveillance on their audiences, the performance of their services, and the effects of their platforms. Yet they fail to share this information with the public, he said, and the public has a right to know the conclusions of these companies’ research.

In addition to increasing transparency and accountability, many lawmakers are attempting to grapple with the spread of disinformation. Some propose various changes to Section 230 of the Telecommunications Act of 1996.

Hurd said that the issues surrounding Section 230 will not be resolved before the midterm elections, and he recommended that policy-makers take steps outside of new legislation.

For example, the administration of President Joe Biden could lead its own federal reaction to misinformation to help citizens differentiate between fact and fiction, said Hurd.

Section 230

Greene, Paul Social Media Developments Resurface Section 230 Debate

Five days into the new year, two developments bring Section 230 protections back into focus.

Georgia Republican Representative Marjorie Taylor Greene

WASHINGTON, January 5, 2022 – The departure of Republican Kentucky Senator Rand Paul from YouTube and the banning of Georgia Republican Representative Marjorie Taylor Greene from Twitter at the start of the new year have rekindled the debate over what lawmakers will do about Section 230 protections for Big Tech.

Paul removed himself Monday from the video-sharing platform after getting two strikes on his channel for violating the platform’s rules on Covid-19 misinformation, saying he is “[denying] my content to Big Tech…About half of the public leans right. If we all took our messaging to outlets of free exchange, we could cripple Big Tech in a heartbeat.”

Meanwhile, Greene has been permanently suspended from Twitter following repeated violations of Twitter’s terms of service. She has been rebuked by both political opponents and allies for spreading fake news and mis/disinformation since she was elected in 2020. Her rap sheet includes being accused of spreading conspiracy theories promoting white supremacy and antisemitism.

It was ultimately the spreading of Covid-19 misinformation that got Greene permanently banned from Twitter on Sunday. She had already received multiple “strikes” related to Covid-19 misinformation, according to The New York Times, and a fifth strike on Sunday resulted in her account’s permanent suspension.

Just five days into the new year, Greene’s situation – and the quickly-followed move by Paul – has reignited the debate over Section 230 of the Communications Decency Act, which shields big technology platforms from liability for posts by their users.

As it stands now, Twitter is well within its rights to delete or suspend the accounts of any person who violates its terms of service. The right to free speech protected by the First Amendment does not prevent a private corporation, such as Twitter, from enforcing its rules.

In response to her Tweets, Texas Republican Congressman Dan Crenshaw called Greene a “liar and an idiot.” His comments notwithstanding, Crenshaw, like many conservative legislators, has argued that social media companies have become an integral part of the public forum and thus should not have the authority to unilaterally ban or censor voices on their platforms.

Some states, such as Texas and Florida, have gone as far as making it illegal for companies to ban political figures. Though Florida’s bill was quickly halted in the courts, that did not stop Texas from trying to enact similar laws (though they were met with similar results).

Crenshaw himself has proposed federal amendments to Section 230 for any “interactive computer service” that generates $3 billion or more in annual revenue or has 300 million or more monthly users.

The bill – which is still being drafted and does not have an official designation – would allow users to sue social media platforms for the removal of legal content based on political views, gender, ethnicity, and race. It would also make it illegal for these companies to remove any legal, user-generated content from their websites.

Under Crenshaw’s bill, a company such as Facebook or Twitter could be compelled to host any legal speech – objectionable or otherwise – at the risk of being sued. This includes overtly racist, sexist, or xenophobic slurs and rhetoric. A hosting website might be morally opposed to carrying such speech, but if the speech is not explicitly illegal, it would be protected from removal.

While Crenshaw would amend Section 230, other conservatives have advocated for its wholesale repeal. Sen. Lindsey Graham, R-South Carolina, put forward Senate Bill 2972, which would do just that. If passed, the repeal would take effect on the first day of 2024, with no replacement protections in place.

Consequences of such legislation

This is a nightmare scenario for every company with an online presence that hosts user-generated content. If a repeal bill were to pass with no replacement legislation in place, every online company would suddenly become directly responsible for all user content hosted on its platforms.

With the repeal of Section 230, websites would default to being treated as publishers. If users uploaded illegal content to a website, it would be as if the company had published the illegal content itself.

This would likely exacerbate the issue of alleged censorship that Republicans are concerned about. The sheer volume of content generated on platforms like Reddit and YouTube would be far too massive for human moderation teams to review.

Companies would likely be forced to rely on heavier-handed algorithms and bots to censor anything that could open them to legal liability.

Democratic views

Republicans are not alone in their criticism of Section 230, however. Democrats have also flirted with amending or abolishing Section 230, albeit for very different reasons.

Many Democrats believe that Big Tech uses Section 230 to deflect responsibility, and that as long as companies are afforded its protections, they will not adjust their content moderation policies to mitigate dangerous or hateful speech posted online by users, speech that can have real-world consequences.

Some Democrats have written bills that would carve out numerous exemptions to Section 230. Some seek to address the sale of firearms online, while others focus on the spread of Covid-19 misinformation.

Some Democrats have also introduced the Safe Tech Act, which would hold companies accountable for failing to “remove, restrict access to or availability of, or prevent dissemination of material that is likely to cause irreparable harm.”

The reality right now is that the two parties are diametrically opposed on the issue of Section 230.

While Republicans believe there is unfair content moderation that disproportionately censors conservative voices, Democrats believe that Big Tech is not doing enough to moderate their content and keep users safe.

Section 230

Experts Warn Against Total Repeal of Section 230

Panelists note shifting definition of offensive content.

WASHINGTON, November 22, 2021 – Communications experts say action by Congress to essentially gut Section 230 would not truly solve any problems with social media.

Experts emphasized that it is not possible for platforms to remove from their site all content that people may believe to be dangerous. They argue that Section 230 of the Communications Decency Act, which shields platforms from legal liability with respect to what their users post, is necessary in at least some capacity.

During a discussion among these experts at Broadband Breakfast’s Live Online event on Wednesday, Alex Feerst, co-founder of the Digital Trust and Safety Partnership and a former content moderator, said it is to some extent impossible for platforms to moderate “dangerous” speech, because every person has a different opinion about what speech is dangerous. It is this ambiguity, he said, that Section 230 protects companies from.

Still, Feerst believes that platforms should bear some degree of liability for the content on their sites, since mitigating the harm of dangerous speech is necessary where possible. Platforms’ use of artificial intelligence, he said, makes some degree of liability even more essential.

Particularly given the amount of online speech to be reviewed by moderators in the internet age, Feerst said, enforcing clear-cut moderation standards would be too messy and expensive to be viable.

Matt Gerst, vice president for legal and policy affairs at the Internet Association, and Shane Tews, nonresident senior fellow at the American Enterprise Institute, agreed that while content moderation is complex, it is necessary. Scott McCollough, attorney at McCollough Law Firm, said that large social media companies like Facebook are not the cause of all the problems now in the national spotlight; rather, features of today’s society, such as the extreme prevalence of conflict, are to blame for the focus on social media.

Proposals for change

Rick Lane, CEO of Iggy Ventures, proposed that Section 230 reform include a requirement that social media platforms make very clear what content is and is not allowed on their sites. McCollough echoed this concern, saying that many moderation actions platforms currently take do not seem consistent with their stated terms and conditions, and that individual states should be able to examine such instances case by case to determine whether platforms apply their terms fairly.

Feerst highlighted the nuance of this issue, noting that people’s definitions of “consistent” are naturally subjective, but he agreed with McCollough that users who have content removed should be notified of the removal and of the reasoning behind the moderators’ action.

Lane also believes that Section 230 reform should rightfully include a requirement that platforms demonstrate a reasonable standard of care and moderate illegal and other extremely dangerous content on their sites. Tews generally agreed with Lane that such content moderation is complex, noting that she sees a distinction between freedom of speech and illegal activity.

Gerst highlighted concerns from the companies the Internet Association represents that government regulation stemming from Section 230 reform would require widely varied platforms to standardize their operating approaches, diminishing innovation on the internet.

Wednesday, November 17, 2021, 12 Noon ET — The Changing Nature of the Debate About Social Media and Section 230

Facebook is under fire as never before. In response, the social-networking giant has gone so far as to change its official name to Meta (as in the “metaverse”). What are the broader concerns about social media beyond Facebook? How will concerns about Facebook’s practices spill over into other social media networks, and into the debate about Section 230 of the Communications Act?

Panelists for this Broadband Breakfast Live Online session:

  • Scott McCollough, Attorney, McCollough Law Firm
  • Shane Tews, Nonresident Senior Fellow, American Enterprise Institute
  • Alex Feerst, Co-founder, Digital Trust & Safety Partnership
  • Rick Lane, CEO, Iggy Ventures
  • Matt Gerst, VP for Legal & Policy Affairs, Internet Association
  • Drew Clark (moderator), Editor and Publisher, Broadband Breakfast

Panelist resources:

W. Scott McCollough has practiced communications and Internet law for 38 years, with a specialization in regulatory issues confronting the industry.  Clients include competitive communications companies, Internet service and application providers, public interest organizations and consumers.

Shane Tews is a nonresident senior fellow at the American Enterprise Institute (AEI), where she works on international communications, technology and cybersecurity issues, including privacy, internet governance, data protection, 5G networks, the Internet of Things, machine learning, and artificial intelligence. She is also president of Logan Circle Strategies.

Alex Feerst is a lawyer and technologist focused on building systems that foster trust, community, and privacy. He leads Murmuration Labs, which helps tech companies address the risks and human impact of innovative products, and co-founded the Digital Trust & Safety Partnership, the first industry-led initiative to establish best practices for online trust and safety. He was previously Head of Legal and Head of Trust and Safety at Medium, General Counsel at Neuralink, and currently serves on the editorial board of the Journal of Online Trust & Safety, and as a fellow at Stanford University’s Center for Internet and Society.

Rick Lane is a tech policy expert, child safety advocate, and the founder and CEO of Iggy Ventures. Iggy advises and invests in companies and projects that can have a positive social impact. Prior to starting Iggy, Rick served for 15 years as the Senior Vice President of Government Affairs of 21st Century Fox.

Matt Gerst is the Vice President for Legal & Policy Affairs and Associate General Counsel at Internet Association, where he builds consensus on policy positions among IA’s diverse membership of companies that lead the internet industry. Most recently, Matt served as Vice President of Regulatory Affairs at CTIA, where he managed a diverse range of issues including consumer protection, public safety, network resiliency, and universal service. Matt received his J.D. from New York Law School, and he served as an adjunct professor of law in the scholarly writing program at the George Washington University School of Law.

Drew Clark is the Editor and Publisher of BroadbandBreakfast.com and a nationally-respected telecommunications attorney. Drew brings experts and practitioners together to advance the benefits provided by broadband. Under the American Recovery and Reinvestment Act of 2009, he served as head of a State Broadband Initiative, the Partnership for a Connected Illinois. He is also the President of the Rural Telecommunications Congress.
