Social Media

Misinformation Expert Warns About the Great Risks of Political Tampering in the 2020 Election

Photo of Craig Silverman contemplating celebrity trolls by David Jelke

WASHINGTON, February 19, 2020 – A “very complicated media infrastructure” that is both corrupt and creative is developing in the world of political misinformation, warned Craig Silverman, media editor for BuzzFeed, in a talk Tuesday at George Washington University’s Institute for Data, Democracy, and Politics.

Public attention to and concern about misinformation have grown since the 2016 election, Silverman said. However, the public has generally not heard of a new crop of small-time – yet equally disruptive – players in the fake news economy. Indeed, he warned that voters are at greater risk of political tampering in 2020 than they were in 2016.

These range from individual actors to “black PR” firms to entire governments, and Silverman offered a litany of examples.

A BuzzFeed investigation determined that a former NASDAQ analyst was the actor behind a slew of imposter local media websites.

These websites flood their pages with outdated news stories and broken links in an attempt to appear legitimate. An imposter website for Albany, New York, for example, garnered more clicks than any genuine local Albany publication, and advertisers who can’t tell the difference divert revenue into the imposters’ pockets.

Silverman related how political opponents of Philippines President Rodrigo Duterte have adopted the same duplicitous social media tactics that Duterte used to dupe voters and rise to power as a strongman. Filipino politicians appear to have chosen to adapt rather than fight, Silverman said.

Most strikingly, of the 27 attempts by “black-hat affiliate marketers” to influence politics dating back to a 2011 campaign in Bahrain, 19 occurred in 2019; in 2018, only one did. These kinds of campaigns have yet to occur in the U.S., but Silverman warned that it is only a matter of time before they do.

One particularly egregious example was that of an Israeli “black PR” firm called Archimedes Group, which tried to influence several African elections. Its motto was “molding reality to our clients’ wishes.”

This concept of a fluid morality reappeared in Silverman’s analysis of both far-right and far-left misinformation in the U.S.

Silverman documented the rise of the fringe QAnon conspiracy movement and its claims of a global pedophile cabal. “If you want to live in those communities, you can do that,” lamented Silverman.

Silverman also dissected an incident from the previous week in which Bernie Sanders supporters falsely accused a communications adviser for Pete Buttigieg’s campaign of impersonating a pro-Buttigieg Nigerian man on Twitter.

Free Speech

Panel Hears Opposing Views on Content Moderation Debate

Some panelists agreed that egregious information should be downranked on search platforms.


Screenshot of Renee DiResta, research manager at Stanford Internet Observatory.

WASHINGTON, September 14, 2022 – Panelists wrangled over how technology platforms should handle content moderation at an event hosted by the Lincoln Network Friday, with one arguing that search engines should neutralize misinformation that causes direct, “tangible” harms and another advocating an online content moderation standard that does not discriminate by viewpoint.

Debate about what to do with certain content on technology platforms has picked up steam since former President Donald Trump was removed last year from platforms including Facebook and Twitter for allegedly inciting the January 6, 2021, storming of the Capitol.

Search engines generally moderate content algorithmically, prioritizing certain results over others. Most engines, like Google, prioritize results from institutions generally considered to be credible, such as universities and government agencies.

That can be a good thing, said Renee DiResta, research manager at Stanford Internet Observatory. If search engines allow scams or medical misinformation to headline search results, she argued, “tangible” material or physical harms will result.
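To make the mechanism concrete, here is a minimal sketch of credibility-weighted ranking. The per-domain scores, the score() weighting, and the rank() helper are all hypothetical illustrations of the general approach, not Google’s or any real engine’s algorithm.

```python
# Hypothetical sketch: discount a result's relevance by the credibility of
# its source, so scams and medical misinformation don't headline results.

# Made-up per-domain credibility scores in [0, 1].
CREDIBILITY = {
    "cdc.gov": 0.95,
    "state-university.edu": 0.90,
    "miracle-cures.example": 0.20,
}

def rank(results: list[dict]) -> list[dict]:
    """Order results by relevance discounted by source credibility."""
    def score(result: dict) -> float:
        credibility = CREDIBILITY.get(result["domain"], 0.5)  # neutral default
        return result["relevance"] * credibility
    return sorted(results, key=score, reverse=True)

results = [
    {"domain": "miracle-cures.example", "relevance": 0.9},
    {"domain": "cdc.gov", "relevance": 0.8},
]
print(rank(results))  # the credible source now tops the list
```

Downranking of this kind leaves the disfavored result reachable but no longer front and center, which is what distinguishes it from outright removal.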

The internet moved communications from the “one-to-many” broadcast model of media such as television and radio to a “many-to-many” model, said DiResta. She argued that “many-to-many” interactions create social frictions and make possible the formation of social media mobs.

At the beginning of the year, Georgia Republican Rep. Marjorie Taylor Greene was permanently removed from Twitter for allegedly spreading COVID-19 misinformation, the same reason Kentucky Sen. Rand Paul was removed from Alphabet Inc.’s YouTube.

Lincoln Network senior fellow Antonio Martinez endorsed a more permissive content moderation strategy that – excluding content that incites imminent, lawless action – is tolerant of heterodox speech. “To think that we can epistemologically or even technically go in and establish capital-T Truth at scale is impossible,” he said.

Trump has said he is committed to a platform of open speech with the creation of his social media website, Truth Social. Other platforms, such as social media site Parler and video-sharing website Rumble, have purported to allow more speech than the incumbents. SpaceX CEO Elon Musk previously committed to buying Twitter because of its policies prohibiting certain speech, though he now wants out of that commitment.

Alex Feerst, CEO of digital content curator Murmuration Labs, said that free-speech aphorisms – such as, “The cure for bad speech is more speech” – may no longer hold true given the volume of speech enabled by the internet.


Social Media

Americans Should Look to Filtration Software to Block Harmful Content from View, Event Hears

One professor said it is the only way to solve the harmful content problem without encroaching on free speech rights.

Published

on

Photo of Adam Neufeld of Anti-Defamation League, Steve Delbianco of NetChoice, Barak Richman of Duke University, Shannon McGregor of University of North Carolina (left to right)

WASHINGTON, July 21, 2022 – Researchers at an Internet Governance Forum event Thursday recommended the use of third-party software that filters out harmful content on the internet, in an effort to combat what they say are social media algorithms that feed users content they don’t want to see.

Users of social media sites often don’t know what algorithms are filtering the information they consume, said Steve DelBianco, CEO of NetChoice, a trade association that represents the technology industry. Most algorithms function to maximize user engagement by manipulating users’ emotions, which is particularly worrisome, he said.

But third-party software, such as Sightengine and Amazon’s Rekognition – which moderate what users see by screening out images and videos the user flags as objectionable – could act in place of other solutions to tackle disinformation and hate speech, said Barak Richman, professor of law and business at Duke University.

Richman argued that this “middleware technology” is the only way to solve this universal problem without encroaching on free speech rights. He suggested Americans adopt these technologies – which would be supported by popular platforms including Facebook, Google, and TikTok – to create a buffer between harmful algorithms and the user.

Such technologies already exist in limited applications that offer less personalization and accuracy in filtering, said Richman. But the market demand needs to increase to support innovation and expansion in this area.
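As a concrete illustration of the middleware idea, here is a minimal sketch of a filter that sits between a feed and the user. It calls Amazon Rekognition’s image moderation API through boto3; the blocked-category list and the should_filter() helper are hypothetical, and the sketch shows the general concept rather than how Sightengine or any shipping middleware product actually works.

```python
# Hypothetical middleware sketch: hide images that match moderation
# categories the user has opted out of. Assumes AWS credentials are
# configured for boto3.
import boto3

rekognition = boto3.client("rekognition")

# Made-up example of a user's preferences: categories they chose to block.
BLOCKED_CATEGORIES = {"Explicit Nudity", "Violence", "Hate Symbols"}

def should_filter(image_bytes: bytes, min_confidence: float = 80.0) -> bool:
    """Return True if the image matches a category the user opted out of."""
    response = rekognition.detect_moderation_labels(
        Image={"Bytes": image_bytes},
        MinConfidence=min_confidence,
    )
    detected = {
        label["ParentName"] or label["Name"]
        for label in response["ModerationLabels"]
    }
    return bool(detected & BLOCKED_CATEGORIES)

# A feed renderer could call should_filter() on each image before display,
# blurring or hiding anything in the user's blocked categories.
```

Because the filter runs on the user’s behalf against the user’s own preferences, nothing is removed from the platform itself, only from that user’s view – which is what lets Richman claim the approach sidesteps free speech objections.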

Americans across party lines believe that there is a problem with disinformation and hate speech, but disagree on the solution, added fellow panelist Shannon McGregor, senior researcher at the Center for Information, Technology, and Public Life at the University of North Carolina.

The conversation comes as debate continues regarding Section 230, a provision in the Communications Decency Act that protects technology platforms from being liable for content their users post. Some say Section 230 only protects “neutral platforms,” while others claim it allows powerful companies to ignore user harm. Experts in the space disagree on the responsibility of tech companies to moderate content on their platforms.


Free Speech

Experts Reflect on Supreme Court Decision to Block Texas Social Media Bill

Observers on a Broadband Breakfast panel offered differing perspectives on the high court’s decision.


Parler CPO Amy Peikoff

WASHINGTON, June 2, 2022 – Experts hosted by Broadband Breakfast Wednesday were split on what to make of the Supreme Court’s 5-4 decision to reverse a lower court order that had lifted a ban on a Texas social media law, which would have made it illegal for certain large platforms to crack down on speech they deem reprehensible.

The decision keeps the law from taking effect until a lower court makes a full determination.


During a Broadband Breakfast Live Online event on Wednesday, Ari Cohn, free speech counsel for tech lobbyist TechFreedom, argued that the bill “undermines the First Amendment” and the free speech values it protects.

“We have seen time and again over the course of history that when you give the government power to start encroaching on editorial decisions [it will] never go away, it will only grow stronger,” he cautioned. “It will inevitably be abused by whoever is in power.”

Nora Benavidez, senior counsel and director of digital justice and civil rights for advocacy group Free Press, agreed with Cohn. “This is a state effort to control what private entities do,” she said Wednesday. “That is unconstitutional.

“When government attempts to invade into private action that is deeply problematic,” Benavidez continued. “We can see hundreds and hundreds of years of examples of where various countries have inserted themselves into private actions – that leads to authoritarianism, that leads to censorship.”

Different perspectives

Scott McCollough, principal at the McCollough Law Firm, said Wednesday that he believed the law should have been allowed to stand.

“I agree the government should not be picking and choosing who gets to speak and who does not,” he said. “The intent behind the Texas statute was to prevent anyone from being censored – regardless of viewpoint, no matter what [the viewpoint] is.”

McCollough argued that this case was about which free speech values supersede the other – “those of the platforms, or those of the people who feel that they are being shut out from what is today the public square.

“In the end it will be a court that acts, and the court is also the state,” McCollough added. “So, in that respect, the state would still be weighing in on who wins and who loses – who gets to speak and who does not.”

Amy Peikoff, chief policy officer of social media platform Parler, said Wednesday that her primary concern was “viewpoint discrimination in favor of the ruling elite.”

Peikoff was particularly concerned about coordination between state agencies and social media platforms to “squelch certain viewpoints.”

Peikoff clarified, however, that she did not believe the Texas law was the best vehicle to address these concerns, suggesting instead that lawsuits – preferably private ones – be used to remove the “censorious cancer” rather than entangling a government entity in the matter.

“This cancer grows out of a partnership between government and social media to squelch discussion about certain viewpoints and perspectives.”

Our Broadband Breakfast Live Online events take place on Wednesdays at 12 Noon ET. Watch the event on Broadband Breakfast, or REGISTER HERE to join the conversation.

Wednesday, June 1, 2022, 12 Noon ET – BREAKING NEWS EVENT! – The Supreme Court, Social Media and the Culture Wars

The Supreme Court on Tuesday blocked a Texas law that would ban large social media companies from removing posts based on the views they express. Join us for this breaking news event of Broadband Breakfast Live Online in which we discuss the Supreme Court, social media and the culture wars.

Panelists:

  • Scott McCollough, Attorney, McCollough Law Firm
  • Amy Peikoff, Chief Policy Officer, Parler
  • Ari Cohn, Free Speech Counsel, TechFreedom
  • Nora Benavidez, Senior Counsel and Director of Digital Justice and Civil Rights at Free Press
  • Drew Clark (presenter and host), Editor and Publisher, Broadband Breakfast

Panelist resources:

W. Scott McCollough has practiced communications and Internet law for 38 years, with a specialization in regulatory issues confronting the industry. Clients include competitive communications companies, Internet service and application providers, public interest organizations and consumers.

Amy Peikoff is the Chief Policy Officer of Parler. After completing her Ph.D., she taught at universities (University of Texas, Austin, University of North Carolina, Chapel Hill, United States Air Force Academy) and law schools (Chapman, Southwestern), publishing frequently cited academic articles on privacy law, as well as op-eds in leading newspapers across the country on a range of issues. Just prior to joining Parler, she founded and was President of the Center for the Legalization of Privacy, which submitted an amicus brief in United States v. Facebook in 2019.

Ari Cohn is Free Speech Counsel at TechFreedom. A nationally recognized expert in First Amendment law, he was previously the Director of the Individual Rights Defense Program at the Foundation for Individual Rights in Education (FIRE), and has worked in private practice at Mayer Brown LLP and as a solo practitioner, and was an attorney with the U.S. Department of Education’s Office for Civil Rights. Ari graduated cum laude from Cornell Law School, and earned his Bachelor of Arts degree from the University of Illinois at Urbana-Champaign.

Nora Benavidez manages Free Press’s efforts around platform and media accountability to defend against digital threats to democracy. She previously served as the director of PEN America’s U.S. Free Expression Programs, where she guided the organization’s national advocacy agenda on First Amendment and free-expression issues, including press freedom, disinformation defense and protest rights. Nora launched and led PEN America’s media-literacy and disinformation-defense program. She also led the organization’s groundbreaking First Amendment lawsuit, PEN America v. Donald Trump, to hold the former president accountable for his retaliation against and censorship of journalists he disliked.

Drew Clark is the Editor and Publisher of BroadbandBreakfast.com and a nationally-respected telecommunications attorney. Drew brings experts and practitioners together to advance the benefits provided by broadband. Under the American Recovery and Reinvestment Act of 2009, he served as head of a State Broadband Initiative, the Partnership for a Connected Illinois. He is also the President of the Rural Telecommunications Congress.

Photo of the Supreme Court from September 2020 by Aiva.

WATCH HERE, or on YouTube, Twitter and Facebook.

As with all Broadband Breakfast Live Online events, the FREE webcasts will take place at 12 Noon ET on Wednesday.

SUBSCRIBE to the Broadband Breakfast YouTube channel. That way, you will be notified when events go live. Watch on YouTube, Twitter and Facebook.

See a complete list of upcoming and past Broadband Breakfast Live Online events.

