Algorithms Can Assist With the ‘Infodemic’, But Have Limitations, Says Center for Data Innovation
Photo of Information Studies Professor Sarah Roberts courtesy of UCLA

July 9, 2020 — Social media algorithms can assist with detecting false or misleading content amid the coronavirus pandemic, but they have many limitations, said participants in a Center for Data Innovation webinar Wednesday.

The event, titled “Can Algorithms Tackle the ‘Infodemic’?” saw participants discuss the application of algorithmic tools in recognizing fraudulent claims online.

The coronavirus has proven to be a golden opportunity for online scammers, with some hawking fake cures for the illness or disseminating false information about safety guidelines.

Algorithms like Facebook’s can identify the most widely shared falsehoods, but Dr. Sarah Roberts, associate professor in the Department of Information Studies at UCLA, said that when it comes to misinformation, the algorithms often lack tact and nuance.

“It might know, for example, that a cat is a cat,” she said. “But it’s not because it has a cultural and social and historical or biological sense of what a cat… is. It’s because it has a massive database of shapes and other mass data.”

Social media companies employ algorithms to handle a volume of data that far surpasses what human moderation alone can manage. And while individual moderators still handle appeals of automated decisions and respond to content reports, Roberts said that “it’s always been the aspiration to fully automate this process.”

“There’s simply not enough human beings to sit on every live stream, and most people would be uncomfortable with that anyway,” she said.

However, Rachel Thomas, director of the Center for Applied Data Ethics at the University of San Francisco, said that social media companies are often not sufficiently transparent in their decision-making processes.

“I do think that the platforms are overly opaque now in their processes, which I know can be very frustrating and difficult for users that are wrongly tamped down,” she said.