Big Tech

As Google’s CEO Testifies Before Congress, Conservatives Stew About Social Media ‘Censorship’

WASHINGTON, December 11, 2018 — Republicans and conservative activists used Tuesday’s House Judiciary Committee hearing with Google CEO Sundar Pichai to revive claims that large technology companies are biased against them. But these same activists appear unwilling to accept any result that doesn’t validate their claim.

One prominent Republican who has raised claims of censorship by large technology companies is President Donald Trump, who in August took to Twitter to accuse Google of deliberately manipulating search results to highlight negative stories about him:

“Google search results for “Trump News” shows only the viewing/reporting of Fake News Media. In other words, they have it RIGGED, for me & others, so that almost all stories & news is BAD. Fake CNN is prominent. Republican/Conservative & Fair Media is shut out. Illegal?” Trump wrote, suggesting without evidence that 96 percent of search results for “Trump News” came from what he called “National Left-Wing Media.”

“Google & others are suppressing voices of Conservatives and hiding information and news that is good. They are controlling what we can & cannot see. This is a very serious situation will be addressed!” he added.

At the Tuesday hearing, Pichai rebutted all charges that political bias was present in Google search results. “Our products are built without any bias,” he told legislators, repeatedly.

Facebook is another target of the conservatives’ ire

But at the Google hearing, the critics kept coming. Another frequent target of conservatives’ censorship accusations is Facebook, which has been struggling to harden its platform against foreign disinformation in the two years since Russia’s Internet Research Agency used it to reach millions of Americans with pro-Trump, anti-Clinton messaging.

Some of the changes Facebook has implemented in the aftermath of 2016 have focused on the algorithm it uses to choose what content users see on the site; others have focused on combating disinformation and hoax websites masquerading as news organizations.

While Facebook says those changes were made in order to favor original content posted by users’ friends and family, and to elevate local news and fact-checked, trusted news outlets, conservative bloggers say they’ve been targeted for censorship as part of a coordinated campaign by Democrats and their allies.

A ‘conspiracy theorist’ states his case

Jim Hoft, who runs the popular pro-Trump blog the Gateway Pundit, said that the evidence of Facebook’s censorship can be found in his and other conservative news sites’ traffic numbers.

“The top conservative sites on the right noticed this last year, but this year, my traffic has gone from thirty-three percent…to about three percent today. Our little blog had a huge influence on the election, and since that time our advertisers have been targeted, we’ve had two junk lawsuits against us, and our Facebook traffic has been shut down,” Hoft said in an interview.

Hoft was referencing two defamation lawsuits against him: one by a student his site misidentified as a mass shooter, and another by a State Department employee whom Hoft suggested was a “deep state shill” after he allowed news organizations to use his video of white nationalist James Alex Fields Jr. using his car to murder anti-racist counter-protester Heather Heyer at the 2017 Unite the Right rally in Charlottesville, Virginia. Fields was sentenced to life in prison on Tuesday.

“I would argue that this is a coordinated attack on conservative sites,” Hoft said.

When asked who he thought was “coordinating” the “attack,” Hoft replied: “Call me a conspiracy theorist, but I wish I knew.”

Facebook also denies political bias in the administration of its platform

As with Google, Facebook executives have repeatedly denied any bias in how the company runs its platform or enforces its terms of service. Still, they have attempted to acknowledge conservatives’ concerns by commissioning an external audit of the entire company to determine whether there is any inadvertent political bias in its operations.

The company retained Jon Kyl, R-Ariz., then retired from the Senate and a partner at the law firm of Covington and Burling, to conduct the audit.

Kyl returned to the Senate in September after Arizona Governor Doug Ducey tapped him to fill the seat left vacant by the death of his onetime colleague, Sen. John McCain, R-Ariz.

A Facebook spokesperson told BroadbandBreakfast that the audit is ongoing under the direction of other Covington and Burling attorneys, and that the company looks forward to sharing the results.

But to Hoft, the results may not matter if they don’t confirm his suspicions.

“If the senator finds there is no bias by Facebook, then no, I won’t accept the results,” he said.

Diamond and Silk aren’t waiting for the results of any social media audit

Two other prominent pro-Trump activists who said they wouldn’t accept any result that doesn’t show pervasive bias against conservatives are Lynette Hardaway and Rochelle Richardson, the pro-Trump YouTube personalities who go by the name Diamond and Silk online.

Hardaway and Richardson found themselves in the spotlight in April 2018 when they told the House Judiciary Committee that Facebook had allegedly suspended them for being “unsafe to the community.”

During their congressional testimony, Hardaway and Richardson pointed to exchanges with Facebook staff explaining other disciplinary actions the company took against them as evidence of bias, and also cited low viewership numbers for their videos as further evidence of censorship.

In a phone interview with BroadbandBreakfast, the pair continued to cite low viewership numbers as proof of a censorship conspiracy.

“Why is it that somebody with 500,000 followers was able to garner 5,000,000 views, and we have 1,200,000 and we were only able to garner 13,000 on our video?” Hardaway asked.

“There’s something not right with this algorithm system, this algorithm system is discriminating against conservative voices, and they’re censoring and stifling conservative voices,” she added.

Richardson, her “Silk” counterpart, suggested that it was only conservatives who’ve been affected by Facebook’s changes.

“I do not see liberals complaining about any kind of censorship,” she said.

A new ‘Fairness Doctrine’ for the internet?

Despite the myriad conservative activists and politicians claiming systematic bias and calling for regulation, experts haven’t found anything of the sort, and most remain skeptical of the need for what would amount to a renewed “fairness doctrine” — the former Federal Communications Commission regulation that required television and radio stations to give equal time to both sides when discussing controversial issues — for the internet.

One expert who testified in April alongside Hardaway and Richardson, TechFreedom President Berin Szoka, said the idea espoused by some conservatives that government should step in to regulate social media companies is “insane.”

“I don’t think they have any clue what that would mean,” he said, comparing it to the Fairness Doctrine, which was scrapped during the Reagan administration.

That policy, which Szoka called “hugely problematic and impractical,” was long reviled by conservatives.

What some conservatives want for social media “goes way, way beyond” what was required by the Fairness Doctrine, Szoka said, because it would treat companies like Facebook as government actors, meaning they could not restrict speech in any way.

Szoka added that conservatives are wary of Facebook’s attempts to crack down on fake accounts, hoaxes, fabricated news and disinformation because they often benefit from such tactics.

“This is entirely about narrow political interests and short term political interests,” he said.

“Right now the fake news industry is ginning up the American id for the Republican Party. It is not surprising, therefore, that Republicans have suddenly done a complete 180 degree turn on everything they used to say about the Fairness Doctrine, and how the First Amendment doesn’t apply to private actors, just doesn’t apply to the Internet. Instead, they now want a Fairness Doctrine for the internet on steroids,” he said.

‘Popehat’ blog author weighs in on the controversy, against social media terms of service

Some conservatives cite the First Amendment when suggesting that technology and social media companies shouldn’t be able to enforce terms of service against political speech. But those who accuse Facebook and others of censorship “pretend that companies like Facebook don’t have free speech rights, and they do,” said Ken White, a former federal prosecutor and free speech advocate who frequently writes about First Amendment issues on the “Popehat” blog.

“Facebook and Twitter and all these other platforms have a right of free expression and free association, and part of that is them creating the type of platform they want to offer to their customers, which may not include me, but that’s their right,” he said.

White said that while some Republicans are using congressional hearings to push the idea that conservatives are being censored, from all the evidence he’s seen, there is no censorship taking place.

(Photo of Google CEO Sundar Pichai being sworn in for his testimony before the House Judiciary Committee on December 11, 2018, taken by Drew Clark.)

Andrew Feinberg was the White House Correspondent and Managing Editor for Breakfast Media. He rejoined BroadbandBreakfast.com in late 2016 after working as a staff writer at The Hill and as a freelance writer. He worked at BroadbandBreakfast.com from its founding in 2008 to 2010, first as a Reporter and then as Deputy Editor. He also covered the White House for Russia's Sputnik News from the beginning of the Trump Administration until he was let go for refusing to use White House press briefings to promote conspiracy theories, and later documented the experience in a story which set off a chain of events leading to Sputnik being forced to register under the Foreign Agents Registration Act. Andrew's work has appeared in such publications as The Hill, Politico, Communications Daily, Washington Internet Daily, Washington Business Journal, The Sentinel Newspapers, FastCompany.TV, Mashable, and Silicon Angle.

Section 230

Section 230 Interpretation Debate Heats Up Ahead of Landmark Supreme Court Case

Panelists disagreed over the merits of Section 230’s protections and the extent to which they apply.

Screenshot of speakers at the Federalist Society webinar

WASHINGTON, January 25, 2023 — With less than a month to go before the Supreme Court hears a case that could dramatically alter internet platform liability protections, speakers at a Federalist Society webinar on Tuesday were sharply divided over the merits and proper interpretation of Section 230 of the Communications Decency Act.

Gonzalez v. Google, which will go before the Supreme Court on Feb. 21, asks if Section 230 protects Google from liability for hosting terrorist content — and promoting that content via algorithmic recommendations.

If the Supreme Court agrees that “Section 230 does not protect targeted algorithmic recommendations, I don’t see a lot of the current social media platforms and the way they operate surviving,” said Ashkhen Kazaryan, a senior fellow at Stand Together.

Joel Thayer, president of the Digital Progress Institute, argued that the bare text of Section 230(c)(1) does not include any mention of the “immunities” often attributed to the statute, echoing an argument made by several Republican members of Congress.

“All the statute says is that we cannot treat interactive computer service providers or users — in this case, Google’s YouTube — as the publisher or speaker of a third-party post, such as a YouTube video,” Thayer said. “That is all. Warped interpretations from courts… have drastically moved away from the text of the statute to find Section 230(c)(1) as providing broad immunity to civil actions.”

Kazaryan disagreed with this claim, noting that the original co-authors of Section 230 — Sen. Ron Wyden, D-OR, and former Rep. Chris Cox, R-CA — have repeatedly said that Section 230 does provide immunity from civil liability under specific circumstances.

Wyden and Cox reiterated this point in a brief filed Thursday in support of Google, explaining that whether a platform is entitled to immunity under Section 230 relies on two prerequisite conditions. First, the platform must not be “responsible, in whole or in part, for the creation or development of” the content in question, as laid out in Section 230(f)(3). Second, the case must be seeking to treat the platform “as the publisher or speaker” of that content, per Section 230(c)(1).

The statute’s co-authors argued that Google satisfied these conditions and was therefore entitled to immunity, even if its recommendation algorithms made it easier for users to find and consume terrorist content. “Section 230 protects targeted recommendations to the same extent that it protects other forms of content presentation,” they wrote.

Despite the support of Wyden and Cox, Randolph May, president of the Free State Foundation, predicted that the case was “not going to be a clean victory for Google.” And in addition to the upcoming Supreme Court cases, both Congress and President Joe Biden could potentially attempt to reform or repeal Section 230 in the near future, May added.

May advocated for substantial reforms to Section 230 that would narrow online platforms’ immunity. He also proposed that a new rule rely on a “reasonable duty of care” that would both preserve the interests of online platforms and recognize the harms that fall under their control.

To establish a good replacement for Section 230, policymakers must determine whether there is “a difference between exercising editorial control over content on the one hand, and engaging in conduct relating to the distribution of content on the other hand… and if so, how you would treat those differently in terms of establishing liability,” May said.

No matter the Supreme Court’s decision in Gonzalez v. Google, the discussion is already “shifting the Overton window on how we think about social media platforms,” Kazaryan said. “And we already see proposed regulation legislation on state and federal levels that addresses algorithms in many different ways and forms.”

Texas and Florida have already passed laws that would significantly limit social media platforms’ ability to moderate content, although both have been temporarily blocked pending litigation. Tech companies have asked the Supreme Court to take up the cases, arguing that the laws violate their First Amendment rights by forcing them to host certain speech.

Section 230

Supreme Court Seeks Biden Administration’s Input on Texas and Florida Social Media Laws

The court has not yet agreed to hear the cases, but multiple justices have commented on their importance.

Photo of Solicitor General Elizabeth Prelogar courtesy of the U.S. Department of Justice

WASHINGTON, January 24, 2023 — The Supreme Court on Monday asked for the Joe Biden administration’s input on a pair of state laws that would prevent social media platforms from moderating content based on viewpoint.

The Republican-backed laws in Texas and Florida both stem from allegations that tech companies are censoring conservative speech. The Texas law would restrict platforms with at least 50 million users from removing or demonetizing content based on “viewpoint.” The Florida law places significant restrictions on platforms’ ability to remove any content posted by members of certain groups, including politicians.

Two trade groups — NetChoice and the Computer & Communications Industry Association — jointly challenged both laws, meeting with mixed results in appeals courts. They, alongside many tech companies, argue that the laws would violate platforms’ First Amendment right to decide what speech to host.

Tech companies also warn that the laws would force them to disseminate objectionable and even dangerous content. In an emergency application to block the Texas law from going into effect in May, the trade groups wrote that such content could include “Russia’s propaganda claiming that its invasion of Ukraine is justified, ISIS propaganda claiming that extremism is warranted, neo-Nazi or KKK screeds denying or supporting the Holocaust, and encouraging children to engage in risky or unhealthy behavior like eating disorders.”

The Supreme Court has not yet agreed to hear the cases, but multiple justices have commented on the importance of the issue.

In response to the emergency application in May, Justice Samuel Alito wrote that the case involved “issues of great importance that will plainly merit this Court’s review.” However, he disagreed with the court’s decision to block the law pending review, writing that “whether applicants are likely to succeed under existing law is quite unclear.”

Monday’s request asking Solicitor General Elizabeth Prelogar to weigh in on the cases allows the court to put off the decision for another few months.

“It is crucial that the Supreme Court ultimately resolve this matter: it would be a dangerous precedent to let government insert itself into the decisions private companies make on what material to publish or disseminate online,” CCIA President Matt Schruers said in a statement. “The First Amendment protects both the right to speak and the right not to be compelled to speak, and we should not underestimate the consequences of giving government control over online speech in a democracy.”

The Supreme Court is still scheduled to hear two other major content moderation cases next month, which will decide whether Google and Twitter can be held liable for terrorist content hosted on their respective platforms.

Expert Opinion

Luke Lintz: The Dark Side of Banning TikTok on College Campuses

Campus TikTok bans could have negative consequences for students.

The author of this expert opinion is Luke Lintz, co-owner of HighKey Enterprises LLC

In recent months, there have been growing concerns about the security of data shared on the popular social media app TikTok. As a result, a number of colleges and universities have decided to ban the app from their campuses.

While these bans may have been implemented with the intention of protecting students’ data, they could also have a number of negative consequences.

Banning TikTok on college campuses could hurt the interconnectedness of the student body. Many students use the app to connect with others who share their interests or come from similar backgrounds. For example, international students may use the app to connect with other students from their home countries, and students from underrepresented groups may use it to connect with others who share similar experiences.

By denying them access to TikTok, colleges may be inadvertently limiting their students’ ability to form diverse and supportive communities. This can have a detrimental effect on the student experience, as students may feel isolated and disconnected from their peers. Additionally, it can also have a negative impact on the wider college community, as the ban may make it more difficult for students from different backgrounds to come together and collaborate.

Furthermore, by banning TikTok, colleges may also be missing out on the opportunity to promote diverse events on their campuses. The app is often used by students to share information about events, clubs and other activities that promote diversity and inclusivity. Without this platform, it may be more difficult for students to learn about these initiatives and for organizations to reach a wide audience.

Lastly, it’s important to note that banning TikTok on college campuses could also have a negative impact on the ability of college administrators to communicate with students. Many colleges and universities have started to use TikTok as a way to connect with students and share important information and updates. The popularity of TikTok makes it the perfect app for students to use to reach large, campus-wide audiences.

TikTok also offers a unique way for college administrators to connect with students in a more informal and engaging way. TikTok allows administrators to create videos that are fun, creative and relatable, which can help to build trust and to heighten interaction with students. Without this platform, it may be more difficult for administrators to establish this type of connection with students.

Banning TikTok from college campuses could have a number of negative consequences for students, including limiting their ability to form diverse and supportive communities, cutting them off from opportunities and making it harder to stay informed about what’s happening on campus. College administrators should weigh these potential consequences before deciding to ban TikTok from their campuses.

Luke Lintz is a successful businessman, entrepreneur and social media personality. Today, he is the co-owner of HighKey Enterprises LLC, which aims to revolutionize social media marketing. HighKey Enterprises is a highly rated company that has molded its global reputation by servicing high-profile clients that range from A-listers in the entertainment industry to the most successful one percent across the globe. This piece is exclusive to Broadband Breakfast.

Broadband Breakfast accepts commentary from informed observers of the broadband scene. Please send pieces to commentary@breakfast.media. The views reflected in Expert Opinion pieces do not necessarily reflect the views of Broadband Breakfast and Breakfast Media LLC.
