WASHINGTON, October 14, 2021 – House Democrats are preparing to introduce legislation Friday that would remove legal immunities for companies that knowingly allow content that is physically or emotionally damaging to their users, following testimony last week from a Facebook whistleblower who claimed the company is able to push harmful content because of such legal protections.
The Justice Against Malicious Algorithms Act would amend Section 230 of the Communications Decency Act – which provides legal liability protections to companies for the content their users post on their platforms – to remove that shield when the platform “knowingly or recklessly uses an algorithm or other technology to recommend content that materially contributes to physical or severe emotional injury,” according to a Thursday press release. The release noted that the legislation will not apply to small online platforms with fewer than five million unique monthly visitors or users.
The legislation is relatively narrow in its target: algorithms that rely on a user’s personal history to recommend content. It won’t apply to search features or to algorithms that do not rely on that personalization, nor to web hosting or data storage and transfer.
Reps. Anna Eshoo, D-California, Frank Pallone Jr., D-New Jersey, Mike Doyle, D-Pennsylvania, and Jan Schakowsky, D-Illinois, plan to introduce the legislation a little over a week after Facebook whistleblower Frances Haugen alleged that the company misrepresents how much offending content it removes.
Citing Haugen’s testimony before the Senate on October 5, Eshoo said in the release that “Facebook is knowingly amplifying harmful content and abusing the immunity of Section 230 well beyond congressional intent.
“The Justice Against Malicious Algorithms Act ensures courts can hold platforms accountable when they knowingly or recklessly recommend content that materially contributes to harm. This approach builds on my bill, the Protecting Americans from Dangerous Algorithms Act, and I’m proud to partner with my colleagues on this important legislation.”
The Protecting Americans from Dangerous Algorithms Act was introduced with Rep. Tom Malinowski, D-New Jersey, last October to hold companies responsible for “algorithmic amplification of harmful, radicalizing content that leads to offline violence.”
From Haugen’s testimony to legislation
Haugen claimed in her Senate testimony that according to internal research estimates, Facebook acts against just three to five percent of hate speech and 0.6 percent of violence incitement.
“The reality is that we’ve seen from repeated documents in my disclosures that Facebook’s AI systems only catch a very tiny minority of offending content and, best-case scenario, in the case of something like hate speech, at most they will ever get 10 to 20 percent,” Haugen testified.
Haugen was catapulted into the national spotlight after she revealed on the television program 60 Minutes that she was the person who leaked documents to the Wall Street Journal and the Securities and Exchange Commission that reportedly showed Facebook knew about the mental health harms its photo-sharing app Instagram causes teens but allegedly ignored the findings because they conflicted with its profit motive.
Earlier this year, Facebook CEO Mark Zuckerberg said the company was developing a version of Instagram for kids under 13. But following the Journal story and calls by lawmakers to back down from pursuing the app, Facebook suspended the app’s development and said it was making changes to its apps to “nudge” users away from content they may find harmful.
Haugen’s testimony versus Zuckerberg’s Section 230 vision
In his testimony before the House Energy and Commerce Committee in March, Zuckerberg claimed that the company’s hate speech removal policy “has long been the broadest and most aggressive in the industry.”
This claim has been the basis for the CEO’s suggestion that Section 230 be amended to punish companies that fail to build systems for removing violent and hateful content whose size and effectiveness are proportional to the platform’s size. In other words, larger sites would face more regulation and smaller sites fewer.
Or in Zuckerberg’s words to Congress, “platforms’ intermediary liability protection for certain types of unlawful content [should be made] conditional on companies’ ability to meet best practices to combat the spread of harmful content.”
Facebook has previously pushed for FOSTA-SESTA, a controversial 2018 law that created an exception to Section 230 for advertisements related to prostitution. Lawmakers have proposed other modifications to the liability provision, including removing protections for content that platforms are paid to carry and for platforms that allow the spread of vaccine misinformation.
Zuckerberg said companies shouldn’t be held responsible for individual pieces of content that could or would evade the systems in place, so long as the company has demonstrated that it maintains “adequate systems to address unlawful content.” That, he said, is predicated on transparency.
But according to Haugen, “Facebook’s closed design means it has no oversight — even from its own Oversight Board, which is as blind as the public. Only Facebook knows how it personalizes your feed for you. It hides behind walls that keep the eyes of researchers and regulators from understanding the true dynamics of the system.” She also alleges that Facebook’s leadership hides “vital information” from the public and global governments.
An Electronic Frontier Foundation study found that Facebook lags behind competitors on issues of transparency.
Where the parties agree
Zuckerberg and Haugen do agree that Section 230 should be amended. Haugen would amend Section 230 “to make Facebook responsible for the consequences of their intentional ranking decisions,” meaning that practices such as engagement-based ranking would be evaluated for the incendiary or violent content they promote above more mundane content. If Facebook is choosing to promote content which damages mental health or incites violence, Haugen’s vision of Section 230 would hold them accountable. This change would not hold Facebook responsible for user-generated content, only the promotion of harmful content.
Both have also called for Congress to create a third-party body to provide oversight of platforms like Facebook.
Haugen asks that this body be able to conduct independent audits of Facebook’s data, algorithms and research, and that the information be made available to the public, scholars and researchers to interpret, with adequate privacy protections and anonymization in place. Besides taking into account the size and scope of the platforms it regulates, Zuckerberg asks that the practices of the body be “fair and clear” and that unrelated issues “like encryption or privacy changes” be dealt with separately.
With reporting from Riley Steward
Parler Policy Exec Hopes for ‘Sustainable’ Free Speech Change at Twitter if Musk Buys Platform
Parler’s Amy Peikoff said she hopes Twitter will follow in her social media company’s footsteps.
WASHINGTON, May 16, 2022 – A representative from a growing conservative social media platform said last week that she hopes Twitter, under new leadership, will emerge as a “sustainable” platform for free speech.
Amy Peikoff, chief policy officer of social media platform Parler, said as much during a Broadband Breakfast Live Online event Wednesday, in which she wondered about the implications of platforms banning accounts for views deemed controversial.
The social media world has been captivated by the lingering possibility that SpaceX and Tesla CEO Elon Musk could buy Twitter, which the billionaire has criticized for making decisions he said infringe on free speech.
Before Musk’s bid for the company, Parler saw a surge in member sign-ups after former President Donald Trump was banned from Twitter for comments the platform saw as encouraging the Capitol riot of January 6, 2021 – a move that both Peikoff and Trump criticized.
Peikoff said she believes Twitter should be a free speech platform just like Parler, and she hopes Musk’s promises will bring “sustainable” change.
“At Parler, we expect you to think for yourself and curate your own feed,” Peikoff told Broadband Breakfast Editor and Publisher Drew Clark. “The difference between Twitter and Parler is that on Parler the content is controlled by individuals; Twitter takes it upon itself to moderate by itself.”
She recommended “tools in the hands of the individual users to reward productive discourse and exercise freedom of association.”
Peikoff reiterated her criticism of Twitter’s permanent ban of Trump following the January 6 insurrection at the U.S. Capitol, and recounted the struggle Parler had in obtaining access to hosting services on AWS, Amazon’s web services platform.
While she defended the role that Section 230 of the Communications Decency Act plays for Parler and others, Peikoff criticized what she described as Twitter’s collusion with the government. Section 230 provides immunity from civil suits over comments posted by others on a social media network.
For example, Peikoff cited a July 2021 statement by former White House Press Secretary Jen Psaki raising concerns about “misinformation” on social media. When Twitter takes action to stifle anti-vaccination speech at the behest of the White House, Peikoff argued, it crosses the line into a form of censorship by social media giants that is, in effect, “state action.”
Conservatives censored by Twitter or other social media networks that are undertaking such “state action” are wrongfully being deprived of their First Amendment rights, she said.
“I would not like to see more of this entanglement of government and platforms going forward,” Peikoff said, adding that she would rather “leave human beings free to information and speech.”
The acquisition of social media powerhouse Twitter by Elon Musk, the world’s richest man, raises a host of issues about social media, free speech, and the power of persuasion in our digital age. Twitter already serves as the world’s de facto public square. But it hasn’t been without controversy, including the platform’s decision to ban former President Donald Trump in the wake of his tweets during the January 6 attack on the U.S. Capitol. Under new management, will Twitter become more hospitable to Trump and his allies? Does Twitter have a free speech problem? How will Mr. Musk’s acquisition change the debate about social media and Section 230 of the Communications Decency Act?
Guests for this Broadband Breakfast for Lunch session:
- Amy Peikoff, Chief Policy Officer, Parler
- Drew Clark (host), Editor and Publisher, Broadband Breakfast
Amy Peikoff is the Chief Policy Officer of Parler. After completing her Ph.D., she taught at universities (University of Texas, Austin, University of North Carolina, Chapel Hill, United States Air Force Academy) and law schools (Chapman, Southwestern), publishing frequently cited academic articles on privacy law, as well as op-eds in leading newspapers across the country on a range of issues. Just prior to joining Parler, she founded and was President of the Center for the Legalization of Privacy, which submitted an amicus brief in United States v. Facebook in 2019.
Drew Clark is the Editor and Publisher of BroadbandBreakfast.com and a nationally respected telecommunications attorney. Drew brings experts and practitioners together to advance the benefits provided by broadband. Under the American Recovery and Reinvestment Act of 2009, he served as head of a State Broadband Initiative, the Partnership for a Connected Illinois. He is also the President of the Rural Telecommunications Congress.
As with all Broadband Breakfast Live Online events, the FREE webcasts will take place at 12 Noon ET on Wednesday.
Leave Section 230 Alone, Panelists Urge Government
The debate on what government should — or shouldn’t — do with respect to liability protections for platforms continues.
WASHINGTON, May 10, 2022 – A panelist at a Heritage Foundation event on Thursday said that the government should not make changes to Section 230, which protects online platforms from being liable for the content their users post.
However, the other panelist, Newsweek Opinion Editor Josh Hammer, said technology companies have been colluding with the government to stifle speech, and that Section 230 should be interpreted and applied more vigorously against tech platforms.
Countering this view was Niam Yaraghi, senior fellow at the Brookings Institution’s Center for Technology Innovation.
“While I do agree with the notion that what these platforms are doing is not right, I am much more optimistic” than Hammer, Yaraghi said. “I do not really like the government to come in and do anything about it, because I believe that a capitalist market, an open market, would solve the issue in the long run.”
Addressing a question from the moderator about whether antitrust legislation or stricter interpretation of Section 230 should be the tool to require more free speech on big tech platforms, Hammer said that “Section 230 is the better way to go here.”
Yaraghi, by contrast, said that it was incumbent on big technology platforms to address content moderation, not the government.
In March, Vint Cerf, a vice president and chief internet evangelist at Google, and the president of tech lobbyist TechFreedom, warned against government moderation of content on the internet as Washington focuses on addressing the power of big tech platforms.
While some say Section 230 only protects “neutral platforms,” others claim it allows powerful companies to ignore user harm. Legislation from the likes of Sen. Amy Klobuchar, D-Minn., would strip Section 230 protections from platforms that fail to address Covid mis- and disinformation.
Correction: A previous version of this story said Sen. Ron Wyden, D-Ore., agreed that Section 230 only protected “neutral platforms,” or that it allowed tech companies to ignore user harm. Wyden, one of the authors of the provision in the 1996 Telecom Act, instead believes that the law is a “sword and shield” to protect small companies, organizations and movements against legal liability for what users post on their websites.
Additional correction: A previous version of this story misattributed a statement by Niam Yaraghi to Josh Hammer. The story has been corrected, and additional context added.
Reforming Section 230 Won’t Help With Content Moderation, Event Hears
Government is ‘worst person’ to manage content moderation.
WASHINGTON, April 11, 2022 — Reforming Section 230 won’t help with content moderation on online platforms, observers said Monday.
“If we’re going to have some content moderation standards, the government is going to be, usually, the worst person to do it,” said Chris Cox, a member of the board of directors at tech lobbyist NetChoice and a former Congressman.
These comments came during a panel discussion at an online event hosted by the American Enterprise Institute that focused on speech regulation and Section 230, a provision in the Communications Decency Act that protects technology platforms from being liable for posts by their users.
“Content moderation needs to be handled platform by platform and rules need to be established by online communities according to their community standards,” Cox said. “The government is not very competent at figuring out the answers to political questions.”
There was also discussion about the role of the First Amendment in content moderation on platforms. Jeffrey Rosen, a nonresident fellow at AEI, questioned whether the First Amendment protects content moderation by a platform.
“The concept is that the platform is not a publisher,” he said. “If it’s not [a publisher], then there’s a whole set of questions as to what First Amendment interests are at stake… I don’t think that it’s a given that the platform is the decider of those content decisions. I think that it’s a much harder question that needs to be addressed.”
At a Broadband Breakfast Live Online event late last year, experts said it is not possible for platforms to remove from their sites all content that people may believe to be dangerous. However, some, like Alex Feerst, co-founder of the Digital Trust and Safety Partnership, believe that platforms should bear some degree of liability for the content on their sites, because mitigating the harm of dangerous speech is necessary where possible.