October 5, 2020 – Social media platforms like Twitter and Facebook have waited until very late in the 2020 election to update their user policies, and that decision was a conscious one, claimed David Brody, counsel on privacy and technology at the Lawyers Committee for Civil Rights Under Law, at New America’s Protecting the Vote webinar Thursday.
Brody criticized Facebook’s decision to put the same label on every piece of political content. That doesn’t signal anything significant to the user, he said.
Rather, labels placed on rule-violating content should identify it as such and explain which rule it violates, he said. He also argued for hiding such content behind an interstitial screen so that users know there is a problem with it before they see it.
Big tech platforms haven’t properly grappled with disinformation since the 2016 election. Because these new rules are only now being put in place, their employees may not have enough time to be properly trained on enforcing them.
Moreover, Facebook has so much content that moderators cannot possibly read everything, said Ian Vandewalker, senior counsel at the New York University Brennan Center for Justice. Facebook would need hundreds of thousands of employees to properly police content.
Instead, Facebook has 1 enforcement person per 70,000 users, according to Brody’s calculations.
“This tells you about the scale of the problem,” he said.
Algorithms are also a concern, said Vandewalker. They push people to the most extreme groups they associate with.
“Social media isn’t just an enabler,” said Brody. “It finds these dangerous people and brings them together.” He pointed out that while an algorithm can pause and flag an item while it’s going viral, it can’t meaningfully evaluate the nuances of content.
Yosef Getachew, director of the Common Cause Media and Democracy Program, agreed that greater transparency into tech platforms was necessary.
Panelists acknowledged that algorithms are not inherently bad and that platforms have taken varied approaches to reform, with YouTube and Google on the more aggressive side and Facebook and Twitter on the less aggressive.
Spandi Singh, policy analyst at New America’s Open Technology Institute, suggested platforms adopt policies like WhatsApp’s, which places limits on how many times a message can be forwarded.
Brody suggested that if the business model were changed so that personal data isn’t monetized, these and other problems would be addressed.
Sam Sabin, tech policy reporter at the Morning Consult, moderated the webinar.