Witnesses Blame Social Media Algorithms for Spread of Misinformation
Jericho Casper
June 25, 2020 — Algorithms that platforms use to keep users engaged are partly to blame for a surge of disinformation that threatens the future of American democracy, members of the Subcommittee on Communications and Technology and the Subcommittee on Consumer Protection and Commerce argued at a hearing Wednesday.
“While our nation has long been divided, the divisions in our country are growing,” said Rep. Mike Doyle, D-Pa. “Today we see that much of this division has been driven by misinformation distributed and amplified by social media companies, the largest among them being Facebook, YouTube and Twitter.”
Members blamed the explosion of disinformation on foreign and domestic opportunists who aim to divide Americans and weaken democracy in pursuit of power.
“This includes social media companies themselves, who have put profits before people and whose business models depend on engaging and enraging,” Doyle said.
According to Hany Farid, a professor at the University of California Berkeley, a study found that people who get their news primarily from social media believe 1.4 times more misinformation than others.
“Platforms don’t set out to fuel misinformation, hate, and divisiveness, but that’s what the algorithms have learned to push in their attempts to increase user engagement,” Farid said.
Algorithms designed to compete for users’ attention have been found to reinforce biases, feeding audiences what they already believe and want to hear.
“Every day social media platforms decide what is relevant by recommending it to their billions of users,” Farid said. “Social media companies have learned that outrageous, divisive, conspiratorial content increases engagement.”
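To make the dynamic Farid describes concrete, here is a minimal sketch of a purely engagement-driven feed ranker. The names, weights, and data are hypothetical, not any platform’s actual system; the point is that when the objective rewards every interaction equally, nothing in it penalizes outrage or falsehood.

```python
# Hypothetical sketch of engagement-driven ranking (illustrative only,
# not any platform's real code): posts are scored purely by predicted
# engagement, so content that provokes strong reactions rises to the top.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_clicks: float   # model's engagement estimates (assumed inputs)
    predicted_shares: float

def engagement_score(post: Post) -> float:
    # Reward any interaction, whether informed or enraged; there is no
    # term here for accuracy, civility, or trustworthiness.
    return post.predicted_clicks + 2.0 * post.predicted_shares

def rank_feed(posts: list[Post]) -> list[Post]:
    # Pure engagement maximization: sort by score, highest first.
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("calm local news update", predicted_clicks=30.0, predicted_shares=5.0),
    Post("outrage-bait conspiracy claim", predicted_clicks=80.0, predicted_shares=40.0),
])
print([p.text for p in feed])  # the divisive post ranks first
```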
Farid cited a study finding that 10 percent of YouTube-recommended videos contain conspiracy theories or falsehoods. This, he said, demonstrates that YouTube’s recommendation algorithm directly contributes to the spread of misinformation.
“The core poison is the business model,” Farid continued. “They could change the algorithms and focus not solely on profit. They could retrain the algorithm so trusted information is prioritized.”
“Content creators could simply decide that they value trusted information over untrusted information, respectful over hateful, and unifying over divisive,” he concluded.
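One way to picture the retraining Farid calls for is a simple reweighting of the ranking objective. The sketch below is a hypothetical illustration, with assumed trust values and source names, of how an engagement score could be discounted by a source-trust signal so that trusted information is prioritized.

```python
# Hypothetical reweighting sketch (illustrative names and values only):
# multiply a raw engagement score by a source-trust factor, so a calmer
# post from a trusted source can outrank louder untrusted content.
TRUST = {
    "wire_service": 0.9,        # assumed trust scores, not real data
    "anonymous_meme_page": 0.1,
}

def trusted_score(engagement: float, source: str) -> float:
    # Unknown sources get a neutral prior rather than a free pass.
    trust = TRUST.get(source, 0.5)
    return engagement * trust

# Outrage-bait from a low-trust source now ranks below a calmer post
# from a trusted one, even with a higher raw engagement estimate.
assert trusted_score(100.0, "anonymous_meme_page") < trusted_score(40.0, "wire_service")
```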
Others claimed that Farid was overstating the ease of such a transition.
Calling for increased platform regulation is essentially asking companies to reduce their profits by altering their business models. The development of new AI technology poses an additional challenge.
Artificial intelligence that could moderate content more effectively has yet to be developed, in part because companies have not prioritized it.
Members warned that the use of AI is partially responsible for the current state of disinformation and that systems trained to identify hate speech may inadvertently amplify racial tensions.