Social Media Platforms Are Behind the Curve in Responding to Election 2020 Disinformation
Liana Sowa
October 5, 2020 – Social media platforms like Twitter and Facebook waited until very late in the 2020 election cycle to update their user policies, and that decision was a conscious one, claimed David Brody, counsel on privacy and technology at the Lawyers’ Committee for Civil Rights Under Law, at New America’s Protecting the Vote webinar Thursday.
Brody criticized Facebook’s decision to put the same label on every piece of political content. That doesn’t signal anything significant to the user, he said.
Rather, labels placed on rule-violating content should identify it as a violation and explain which rule it breaks, he said. He also argued for hiding such content behind an interstitial screen so that users know there is a problem with it before they see it.
Big tech platforms haven’t properly grappled with disinformation since the 2016 election. Because these new rules are only now being put in place, employees may not have enough time to be properly trained to enforce them.
Moreover, Facebook has so much content that moderators cannot possibly read everything, said Ian Vandewalker, senior counsel at the New York University Brennan Center for Justice. Facebook would need hundreds of thousands of employees to properly police content.
Instead, Facebook has roughly one enforcement person for every 70,000 users, according to Brody’s calculations.
“This tells you about the scale of the problem,” he said.
Algorithms are also a concern, said Vandewalker. They push people to the most extreme groups they associate with.
“Social media isn’t just an enabler,” said Brody. “It finds these dangerous people and brings them together.” He pointed out that while an algorithm can stop and check an item as it goes viral, it can’t meaningfully evaluate the nuances of content.
Yosef Getachew, director of the Common Cause Media and Democracy Program, agreed that greater transparency into tech platforms was necessary.
Panelists acknowledged that algorithms were not inherently bad and that platforms have taken varied approaches to reform, with YouTube and Google at the more aggressive end and Facebook and Twitter at the less aggressive one.
Spandana Singh, policy analyst at New America’s Open Technology Institute, suggested platforms adopt policies like WhatsApp’s, which limits how many times a message can be forwarded.
Brody suggested that if the business model were changed so that personal data isn’t monetized, these and other problems would be addressed.
Sam Sabin, tech policy reporter at Morning Consult, moderated the webinar.