Social Media Companies Can Block and Control Harmful Content Amidst Current Coronavirus Disinformation

Elijah Labby

Photo of Ryan Calo, principal investigator at the Center for an Informed Public, courtesy of Cornell University

June 9, 2020 — Though disinformation is rampant online, there is still hope that social media companies can control it, said Ryan Calo, principal investigator at the Center for an Informed Public, in an interview on KUOW Public Radio Tuesday.

While social media platforms have recently seen vast amounts of false information about the coronavirus pandemic and national protests, companies like Facebook have managed to block similarly harmful content in the past, Calo said.

“You just don’t see online gambling advertisements the way you used to; you don’t see jihadi recruitment videos the way you used to,” he pointed out.

However, Calo said that companies must be placed under pressure before they will cut down on harmful content.

“What you see over time is that when a company is highly motivated to put an end to something…they’ve been really good at it,” he said.

Calo also argued for increased responsibility for those in positions of authority who knowingly share false or misleading material. If they recognize that the content they shared was misleading, he said, they should take steps to clarify and correct the mistake.

“The best practice is that they should come clean about it, and they should take a screenshot of it,” he said. “Delete the actual tweets so it can’t continue to propagate.”

Calo said that these measures are crucial in an age of uncertainty about social media platforms’ responsibility for misleading content on their websites.

These concerns reached a high point in late May, when President Donald Trump tweeted that mail-in ballots would be “substantially fraudulent.”

“Mail boxes [sic] will be robbed, ballots will be forged & even illegally printed out & fraudulently signed. The Governor of California is sending Ballots to millions of people, anyone living in the state, no matter who they are or how they got there, will get one,” he tweeted.

Twitter added warning labels to the tweets, saying that they were misleading and urging users to “Get the facts about mail-in ballots.”

In response, Trump signed an executive order attempting to roll back legal protections for Twitter and other platforms that engage in content moderation.

However, Calo said that Twitter’s actions were not only constitutional but also part of the cost of opting into its service in the first place.

“The First Amendment limits what the government can do, not what Twitter can do as a private company,” he said. “…The President can’t stop them from commenting on what he’s saying.”

The interview can be viewed here.

Elijah Labby was a Reporter with Broadband Breakfast. He was born in Pittsburgh, Pennsylvania and now resides in Orlando, Florida. He studies political science at Seminole State College, and enjoys reading and writing fiction (but not for Broadband Breakfast).
