Social Media

Researcher: Algorithms Cannot Be Blamed for Disinformation, But They Contribute to It

A Columbia University researcher shared her perspectives at an event hosted by The Atlantic that featured former President Barack Obama.

Photo of Camille François, Karrie Karahalios and Casey Newton

WASHINGTON, April 14, 2022 – A researcher from Columbia University says that algorithms such as the ones Facebook uses cannot be blamed for causing mass disinformation, but that they must still be discussed as contributors to the phenomenon.

Camille François discussed the matter during a conference on disinformation and its effects on democracy, hosted by The Atlantic magazine and the University of Chicago Institute of Politics, which featured a conversation with former President Barack Obama on combating disinformation.

Disinformation about topics such as vaccines and the origins of the virus surged online throughout the coronavirus pandemic, and as war wages in Ukraine, the intentional spread of false information online has proven a chief tactic of Russia's invasion campaign.

François spoke on a panel with Karrie Karahalios, a computer science professor at the University of Illinois Urbana-Champaign, that was focused on the power of algorithms.

During the discussion, Karahalios commented on proposed legislation in the House of Representatives that would remove Section 230 protections for algorithmically promoted content, subjecting platforms to legal liability for it. She said content regulation must be handled case by case rather than through a one-size-fits-all approach applied in all settings.

She expressed more concern over some negative effects of algorithmic technology than others, highlighting as particularly problematic Michigan's use of an algorithm to detect unemployment fraud, which led to false accusations of fraud against many individuals.

Karahalios said that because such algorithms are flawed, people must be able to contest the results they produce.

Similarly, François said that when algorithms are used in high-stakes settings such as criminal sentencing, there must be transparency about how they are used.

As a potential remedy for some of the issues algorithms create, Karahalios suggested that data about how they operate – such as how Facebook promotes certain advertisements – be made available to a wide variety of researchers.

Earlier at the conference, Barack Obama said that he “underestimated the degree to which democracies” are vulnerable to misinformation and disinformation.

The former president said that the U.S. must mitigate the influence of dangerous online misinformation through a mix of regulation and industry standards.

Free Speech

Panel Hears Opposing Views on Content Moderation Debate

Some panelists agreed that egregious content should be downranked on search platforms.

Screenshot of Renee DiResta, research manager at Stanford Internet Observatory.

WASHINGTON, September 14, 2022 – Panelists wrangled over how technology platforms should handle content moderation at an event hosted by the Lincoln Network Friday, with one arguing that search engines should neutralize misinformation that causes direct, “tangible” harms and another advocating an online content moderation standard that doesn’t discriminate on viewpoint.

Debate about what to do with certain content on technology platforms has picked up steam since former President Donald Trump was removed last year from platforms including Facebook and Twitter for allegedly inciting the January 6, 2021, storming of the Capitol.

Search engines generally moderate content algorithmically, prioritizing certain results over others. Most engines, like Google, prioritize results from institutions generally considered to be credible, such as universities and government agencies.
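Conceptually, that prioritization is a reranking step layered on top of ordinary relevance scoring. The following is a minimal Python sketch of the idea; the domain weights, field names, and scoring rule are illustrative assumptions, not any search engine's actual algorithm.

```python
# A minimal sketch of credibility-weighted ranking. The boosts and the
# domain heuristic are illustrative assumptions, not real engine values.

CREDIBILITY_BOOST = {
    ".gov": 1.5,  # government agencies
    ".edu": 1.4,  # universities
    ".org": 1.1,  # nonprofits
}

def credibility(url: str) -> float:
    """Return a multiplier based on the result's source domain."""
    domain = url.split("/")[2] if "//" in url else url
    for suffix, boost in CREDIBILITY_BOOST.items():
        if domain.endswith(suffix):
            return boost
    return 1.0  # unknown sources get no boost

def rank(results: list[dict]) -> list[dict]:
    """Order results by topical relevance scaled by source credibility."""
    return sorted(
        results,
        key=lambda r: r["relevance"] * credibility(r["url"]),
        reverse=True,
    )

if __name__ == "__main__":
    results = [
        {"url": "https://example.com/miracle-cure", "relevance": 0.9},
        {"url": "https://www.cdc.gov/vaccine-safety", "relevance": 0.7},
    ]
    for r in rank(results):
        print(r["url"])  # the .gov result outranks the higher-relevance scam
```

A real engine combines far richer signals than a domain suffix; the sketch only shows how source credibility can enter the ranking as a weight alongside topical relevance.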

That can be a good thing, said Renee DiResta, research manager at Stanford Internet Observatory. If search engines allow scams or medical misinformation to headline search results, she argued, “tangible” material or physical harms will result.

The internet moved communications from the “one-to-many” model of broadcast media – e.g., television and radio – to a “many-to-many” model, said DiResta. She argued that “many-to-many” interactions create social frictions and make possible the formation of social media mobs.

At the beginning of the year, Georgia Republican Representative Marjorie Taylor Greene was permanently suspended from Twitter for allegedly spreading Covid-19 misinformation, the same reason Kentucky Senator Rand Paul was suspended from Alphabet Inc.’s YouTube.

Lincoln Network senior fellow Antonio Martinez endorsed a more permissive content moderation strategy that – excluding content that incites imminent, lawless action – is tolerant of heterodox speech. “To think that we can epistemologically or even technically go in and establish capital-T Truth at scale is impossible,” he said.

Trump has said he is committed to a platform of open speech with the creation of his social media website Truth Social. Other platforms, such as social media site Parler and video-sharing website Rumble, have purported to allow more speech than the incumbents. SpaceX CEO Elon Musk previously committed to buying Twitter because of its policies prohibiting certain speech, though he now wants out of that commitment.

Alex Feerst, CEO of digital content curator Murmuration Labs, said that free-speech aphorisms – such as, “The cure for bad speech is more speech” – may no longer hold true given the volume of speech enabled by the internet.

Social Media

Americans Should Look to Filtration Software to Block Harmful Content from View, Event Hears

One professor said it is the only way to solve the harmful content problem without encroaching on free speech rights.

Photo of Adam Neufeld of Anti-Defamation League, Steve Delbianco of NetChoice, Barak Richman of Duke University, Shannon McGregor of University of North Carolina (left to right)

WASHINGTON, July 21, 2022 – Researchers at an Internet Governance Forum event Thursday recommended the use of third-party software that filters out harmful content on the internet, in an effort to combat what they said are social media algorithms that feed users content they don’t want to see.

Users of social media sites often don’t know what algorithms are filtering the information they consume, said Steve DelBianco, CEO of NetChoice, a trade association that represents the technology industry. Most algorithms function to maximize user engagement by manipulating users’ emotions, which is particularly worrisome, he said.

But third-party software, such as Sightengine and Amazon’s Rekognition – which moderate what users see by screening out images and videos the user flags as objectionable – could act in place of other solutions to tackle disinformation and hate speech, said Barak Richman, professor of law and business at Duke University.

Richman argued that this “middleware technology” is the only way to solve this universal problem without encroaching on free speech rights. He suggested that Americans adopt these technologies – which would be supported by popular platforms including Facebook, Google and TikTok – to create a buffer between harmful algorithms and the user.

Such technologies already exist in limited applications that offer less personalization and accuracy in filtering, said Richman. But the market demand needs to increase to support innovation and expansion in this area.
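As a rough illustration of how such middleware might sit between a platform's feed and the user, here is a minimal Python sketch built around Amazon Rekognition's image-moderation API, one of the tools named above. The feed structure, blocked-category choices, and helper functions are hypothetical; only the detect_moderation_labels call is part of the real AWS SDK.

```python
# A minimal sketch of user-side "middleware" filtering. Only the
# detect_moderation_labels call is real AWS SDK (boto3) API; the feed
# structure and category choices are hypothetical assumptions.
import boto3

rekognition = boto3.client("rekognition")  # assumes AWS credentials are configured

# Categories this hypothetical user has chosen to block from their feed.
BLOCKED_CATEGORIES = {"Violence", "Hate Symbols"}

def allowed(image_bytes: bytes, min_confidence: float = 60.0) -> bool:
    """Return False if the image matches any category the user blocked."""
    response = rekognition.detect_moderation_labels(
        Image={"Bytes": image_bytes},
        MinConfidence=min_confidence,
    )
    for label in response["ModerationLabels"]:
        if (label["Name"] in BLOCKED_CATEGORIES
                or label.get("ParentName") in BLOCKED_CATEGORIES):
            return False
    return True

def filter_feed(posts: list[dict]) -> list[dict]:
    """Drop posts whose attached images the user's filter rejects."""
    return [post for post in posts if allowed(post["image_bytes"])]
```

The design point in Richman's argument is that the user, not the platform, owns the blocked-category list, so the same feed can be filtered differently for each person.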

Americans across party lines believe that there is a problem with disinformation and hate speech, but disagree on the solution, added fellow panelist Shannon McGregor, senior researcher at the Center for Information, Technology, and Public Life at the University of North Carolina.

The conversation comes as debate continues regarding Section 230, a provision in the Communications Decency Act that protects technology platforms from being liable for content their users post. Some say Section 230 only protects “neutral platforms,” while others claim it allows powerful companies to ignore user harm. Experts in the space disagree on the responsibility of tech companies to moderate content on their platforms.

Free Speech

Experts Reflect on Supreme Court Decision to Block Texas Social Media Bill

Observers on a Broadband Breakfast panel offered differing perspectives on the high court’s decision.

Parler CPO Amy Peikoff

WASHINGTON, June 2, 2022 – Experts hosted by Broadband Breakfast Wednesday were split on what to make of the Supreme Court’s 5-4 decision to reverse a lower court order that had lifted a ban on a Texas social media law, which would have made it illegal for certain large platforms to crack down on speech they deem reprehensible.

The decision keeps the law from taking effect until a full determination is made by a lower court.

During a Broadband Breakfast Live Online event on Wednesday, Ari Cohn, free speech counsel for tech lobbyist TechFreedom, argued that the bill “undermines the First Amendment to protect the values of free speech.

“We have seen time and again over the course of history that when you give the government power to start encroaching on editorial decisions [it will] never go away, it will only grow stronger,” he cautioned. “It will inevitably be abused by whoever is in power.”

Nora Benavidez, senior counsel and director of digital justice and civil rights for advocacy group Free Press, agreed with Cohn. “This is a state effort to control what private entities do,” she said Wednesday. “That is unconstitutional.

“When government attempts to invade into private action that is deeply problematic,” Benavidez continued. “We can see hundreds and hundreds of years of examples of where various countries have inserted themselves into private actions – that leads to authoritarianism, that leads to censorship.”

Different perspectives

Scott McCollough, principal at the McCollough Law Firm, said Wednesday that he believed the law should have been allowed to stand.

“I agree the government should not be picking and choosing who gets to speak and who does not,” he said. “The intent behind the Texas statute was to prevent anyone from being censored – regardless of viewpoint, no matter what [the viewpoint] is.”

McCollough argued that this case was about which free speech values supersede the other – “those of the platforms, or those of the people who feel that they are being shut out from what is today the public square.

“In the end it will be a court that acts, and the court is also the state,” McCollough added. “So, in that respect, the state would still be weighing in on who wins and who loses – who gets to speak and who does not.”

Amy Peikoff, chief policy officer of social media platform Parler, said Wednesday that her primary concern was “viewpoint discrimination in favor of the ruling elite.”

Peikoff was particularly concerned about coordination between state agencies and social media platforms to “squelch certain viewpoints.”

Peikoff clarified that she did not believe the Texas law was the best vehicle to address these concerns, however, suggesting instead that lawsuits – preferably private ones – be used to remove the “censorious cancer,” rather than entangling a government entity in the matter.

“This cancer grows out of a partnership between government and social media to squelch discussion about certain viewpoints and perspectives.”

Our Broadband Breakfast Live Online events take place on Wednesday at 12 Noon ET. Watch the event on Broadband Breakfast, or REGISTER HERE to join the conversation.

Wednesday, June 1, 2022, 12 Noon ET – BREAKING NEWS EVENT! – The Supreme Court, Social Media and the Culture Wars

The Supreme Court on Tuesday blocked a Texas law that would ban large social media companies from removing posts based on the views they express. Join us for this breaking news event of Broadband Breakfast Live Online in which we discuss the Supreme Court, social media and the culture wars.

Panelists:

  • Scott McCollough, Attorney, McCollough Law Firm
  • Amy Peikoff, Chief Policy Officer, Parler
  • Ari Cohn, Free Speech Counsel, TechFreedom
  • Nora Benavidez, Senior Counsel and Director of Digital Justice and Civil Rights at Free Press
  • Drew Clark (presenter and host), Editor and Publisher, Broadband Breakfast

Panelist resources:

W. Scott McCollough has practiced communications and Internet law for 38 years, with a specialization in regulatory issues confronting the industry. Clients include competitive communications companies, Internet service and application providers, public interest organizations and consumers.

Amy Peikoff is the Chief Policy Officer of Parler. After completing her Ph.D., she taught at universities (University of Texas, Austin, University of North Carolina, Chapel Hill, United States Air Force Academy) and law schools (Chapman, Southwestern), publishing frequently cited academic articles on privacy law, as well as op-eds in leading newspapers across the country on a range of issues. Just prior to joining Parler, she founded and was President of the Center for the Legalization of Privacy, which submitted an amicus brief in United States v. Facebook in 2019.

Ari Cohn is Free Speech Counsel at TechFreedom. A nationally recognized expert in First Amendment law, he was previously the Director of the Individual Rights Defense Program at the Foundation for Individual Rights in Education (FIRE), and has worked in private practice at Mayer Brown LLP and as a solo practitioner, and was an attorney with the U.S. Department of Education’s Office for Civil Rights. Ari graduated cum laude from Cornell Law School, and earned his Bachelor of Arts degree from the University of Illinois at Urbana-Champaign.

Nora Benavidez manages Free Press’s efforts around platform and media accountability to defend against digital threats to democracy. She previously served as the director of PEN America’s U.S. Free Expression Programs, where she guided the organization’s national advocacy agenda on First Amendment and free-expression issues, including press freedom, disinformation defense and protest rights. Nora launched and led PEN America’s media-literacy and disinformation-defense program. She also led the organization’s groundbreaking First Amendment lawsuit, PEN America v. Donald Trump, to hold the former president accountable for his retaliation against and censorship of journalists he disliked.

Drew Clark is the Editor and Publisher of BroadbandBreakfast.com and a nationally-respected telecommunications attorney. Drew brings experts and practitioners together to advance the benefits provided by broadband. Under the American Recovery and Reinvestment Act of 2009, he served as head of a State Broadband Initiative, the Partnership for a Connected Illinois. He is also the President of the Rural Telecommunications Congress.

Photo of the Supreme Court from September 2020 by Aiva.

WATCH HERE, or on YouTube, Twitter and Facebook.

As with all Broadband Breakfast Live Online events, the FREE webcasts will take place at 12 Noon ET on Wednesday.

SUBSCRIBE to the Broadband Breakfast YouTube channel. That way, you will be notified when events go live. Watch on YouTube, Twitter and Facebook.

See a complete list of upcoming and past Broadband Breakfast Live Online events.
