Social Media
Social Media Platforms Are Behind the Curve in Responding to Election 2020 Disinformation

October 5, 2020 – Social media platforms like Twitter and Facebook waited until very late in the 2020 election cycle to update their user policies, and that decision was a conscious one, claimed David Brody, counsel on privacy and technology at the Lawyers’ Committee for Civil Rights Under Law, at New America’s Protecting the Vote webinar Thursday.
Brody criticized Facebook’s decision to put the same label on every piece of political content, which he said doesn’t signal anything significant to the user.
Rather, he argued, a label placed on content that violates the rules should both flag the violation and explain which rule was broken. He also argued for hiding such content behind an interstitial screen so that users know there is a problem with it before they see it.
Big tech platforms haven’t properly grappled with disinformation since the 2016 election. Because these new rules are only now being put in place, employees may not have enough time to be properly trained on enforcing them.
Moreover, Facebook has so much content that moderators cannot possibly read everything, said Ian Vandewalker, senior counsel at the New York University Brennan Center for Justice. Facebook would need hundreds of thousands of employees to properly police content.
Instead, Facebook has one enforcement employee for every 70,000 users, according to Brody’s calculations.
“This tells you about the scale of the problem,” he said.
Algorithms are also a concern, said Vandewalker, because they push people toward the most extreme groups they associate with.
“Social media isn’t just an enabler,” said Brody. “It finds these dangerous people and brings them together.” He pointed out that while an algorithm can stop and check an item while it’s going viral, it can’t meaningfully evaluate the nuances of content.
Yosef Getachew, director of the Common Cause Media and Democracy Program, agreed that greater transparency into tech platforms was necessary.
Panelists acknowledged that algorithms are not inherently bad and that platforms have taken varied approaches to reform, with YouTube and Google on the more aggressive side and Facebook and Twitter less so.
Spandi Singh, policy analyst at New America’s Open Technology Institute, suggested platforms adopt policies like WhatsApp’s, which limits how many times a message can be forwarded.
Brody suggested that if the business model were changed so that personal data were not monetized, these and other problems would be addressed.
Sam Sabin, tech policy reporter at Morning Consult, moderated the webinar.