

Sen. Mark Warner Says His Section 230 Bill Is Crafted With Help of Tech Companies


Photo of Sen. Mark Warner, December 2017, from his office

March 23, 2021 – Sen. Mark Warner, D-Virginia, said he and his staff are in “regular contact” with big tech representatives about Section 230 reform.

“Both my staff and I are in regular contact with a host of individuals on the tech side,” Warner said Monday at a Protocol webinar discussing internet intermediary liability provision Section 230.

“We have had a great deal of contact with Facebook; in the most senior levels on the performance team, we have had an ongoing conversation with Google, although sometimes they decided not to show in our hearings.

“My staff is in contact with major platform entities and will continue to have a dialogue.”

The proposed legislation, introduced by Warner along with Sens. Mazie Hirono, D-Hawaii, and Amy Klobuchar, D-Minn., would maintain platforms’ immunity from legal consequences for what their users post, but would create an exception for content that the companies are paid for.

Critics of the proposal, including Senator Ron Wyden, D-Ore., have said that, if enacted, the change would effectively create a new form of liability on commercial relationships that would force “web hosts, cloud storage providers and even paid email services to purge their networks of any controversial speech.”

After consulting with interest groups, consultants, and experts, Warner said it is time to make some changes and get them right. “Some say the bill doesn’t go far enough; some say it goes too far, but I’m sure we got at the right point.”

Screenshot from the webinar

Clarifying what the bill does and does not do, Warner said the legislation does not restrict anyone’s right to free speech, and that he still wants “customers to be able to say about the good or bad of things they got at their local restaurant.”

The changes, Warner said, will address the disparity between big and small tech companies by maintaining protections for the latter but holding the former responsible for things they get paid for.

“In the late 90s, Section 230 was built to protect tech startups,” Warner said, but it has become a “get out of jail free card” for large corporations.

Reporter Samuel Triginelli was born in Brazil and grew up speaking Portuguese and English, and later learned French and Spanish. He studied communications at Brigham Young University, where he also worked as a product administrator and UX/UI designer. He wants a world with better internet access for all.


Greene, Paul Social Media Developments Resurface Section 230 Debate

Five days into the new year, two developments bring Section 230 protections back into focus.


Georgia Republican Representative Marjorie Taylor Greene

WASHINGTON, January 5, 2022 – The departure of Republican Kentucky Senator Rand Paul from YouTube and the banning of Georgia Republican Representative Marjorie Taylor Greene from Twitter at the start of the new year have rekindled the long-simmering debate over what lawmakers will do about Section 230 protections for Big Tech.

Paul removed himself Monday from the video-sharing platform after getting two strikes on his channel for violating the platform’s rules on Covid-19 misinformation, saying he is “[denying] my content to Big Tech… About half of the public leans right. If we all took our messaging to outlets of free exchange, we could cripple Big Tech in a heartbeat.”

Meanwhile, Greene has been permanently suspended from Twitter following repeated violations of Twitter’s terms of service. She has previously been rebuked by both her political opponents and allies for spreading fake news and mis/disinformation since she was elected in 2020. Her rap sheet includes being accused of spreading conspiracy theories promoting white supremacy and antisemitism.

It was ultimately the spreading of Covid-19 misinformation that got Greene permanently banned from Twitter on Sunday. She had already received multiple “strikes” related to Covid-19 misinformation, according to The New York Times, and a fifth strike on Sunday resulted in her account’s permanent suspension.

Just five days into the new year, Greene’s suspension – and Paul’s move that quickly followed it – has reignited debate over Section 230 of the Communications Decency Act, which shields technology platforms from liability for posts by their users.

As it stands now, Twitter is well within its rights to delete or suspend the account of any person who violates its terms of service. The right to free speech protected by the First Amendment does not prevent a private corporation, such as Twitter, from enforcing its own rules.

In response to her Tweets, Texas Republican Congressman Dan Crenshaw called Greene a “liar and an idiot.” His comments notwithstanding, Crenshaw, like many conservative legislators, has argued that social media companies have become an integral part of the public forum and thus should not have the authority to unilaterally ban or censor voices on their platforms.

Some states, such as Texas and Florida, have gone as far as making it illegal for companies to ban political figures. Though Florida’s bill was quickly halted in the courts, that did not stop Texas from trying to enact similar laws (though they were met with similar results).

Crenshaw himself has proposed federal amendments to Section 230 for any “interactive computer service” that generates $3 billion or more in annual revenue or has 300 million or more monthly users.

The bill – which is still being drafted and does not have an official designation – would allow users to sue social media platforms for the removal of legal content based on political views, gender, ethnicity, and race. It would also make it illegal for these companies to remove any legal, user-generated content from their websites.

Under Crenshaw’s bill, a company such as Facebook or Twitter could be compelled to host any legal speech – objectionable or otherwise – at the risk of being sued. This includes overtly racist, sexist, or xenophobic slurs and rhetoric. While a hosting website might be morally opposed to being party to such kinds of speech, if said speech is not explicitly illegal, it would thus be protected from removal.

While Crenshaw would amend Section 230, other conservatives have advocated for its wholesale repeal. Sen. Lindsey Graham, R-South Carolina, put forward Senate Bill 2972, which would do just that. If passed, the law would take effect on the first day of 2024, with no replacement protections in place.

Consequences of such legislation

This is a nightmare scenario for every company with an online presence that hosts user-generated content. If a repeal bill were to pass with no replacement legislation in place, every online company would suddenly become directly responsible for all user content hosted on its platform.

With the repeal of Section 230, websites would default to being treated as publishers. If users upload illegal content to a website, it would be as if the company published the illegal content themselves.

This would likely exacerbate the alleged censorship that Republicans are concerned about. The volume of content generated on platforms like Reddit and YouTube is far too large for human moderation teams to review.

Companies would likely be forced to rely on heavier-handed algorithms and bots to remove anything that could expose them to legal liability.

Democratic views

Republicans are not alone in their criticism of Section 230, however. Democrats have also flirted with amending or abolishing Section 230, albeit for very different reasons.

Many Democrats believe that Big Tech uses Section 230 to deflect responsibility, and that as long as the companies are afforded its protections, they will not adjust their content moderation policies to mitigate allegedly dangerous or hateful speech that users post online and that carries real-world consequences.

Some Democrats have written bills that would carve out numerous exemptions to Section 230. Some seek to address the sale of firearms online, others focus on the spread of Covid-19 misinformation.

Some Democrats have also introduced the SAFE TECH Act, which would hold companies accountable for failing to “remove, restrict access to or availability of, or prevent dissemination of material that is likely to cause irreparable harm.”

The reality right now is that the two parties are diametrically opposed on the issue of Section 230.

While Republicans believe there is unfair content moderation that disproportionately censors conservative voices, Democrats believe that Big Tech is not doing enough to moderate their content and keep users safe.



Experts Warn Against Total Repeal of Section 230

Panelists note shifting definition of offensive content.


WASHINGTON, November 22, 2021 – Communications experts say action by Congress to essentially gut Section 230 would not truly solve any problems with social media.

Experts emphasized that it is not possible for platforms to remove from their site all content that people may believe to be dangerous. They argue that Section 230 of the Communications Decency Act, which shields platforms from legal liability with respect to what their users post, is necessary in at least some capacity.

During a discussion between these experts at Broadband Breakfast’s Live Online event on Wednesday, Alex Feerst, co-founder of the Digital Trust and Safety Partnership and a former content moderator, said that to a certain extent it is impossible for platforms to moderate “dangerous” speech because every person has a different opinion about what speech is dangerous. It is this ambiguity, he said, that Section 230 protects companies from.

Still, Feerst believes that platforms should bear some degree of liability for the content on their sites, as mitigating the harm of dangerous speech is necessary where possible. He believes that platforms’ use of artificial intelligence makes some degree of liability even more essential.

Particularly given the volume of online speech that moderators must review in the internet age, Feerst said, clear-cut moderation standards are too messy and expensive to be viable.

Matt Gerst, vice president for legal and policy affairs at the Internet Association, and Shane Tews, nonresident senior fellow at the American Enterprise Institute, also said that while content moderation is complex, it is necessary. Scott McCollough, attorney at McCollough Law Firm, said large social media companies like Facebook are not the cause of all the problems with social media now in the national spotlight; rather, features of today’s society, such as the extreme prevalence of conflict, are to blame for the focus on social media.

Proposals for change

Rick Lane, CEO of Iggy Ventures, proposed that Section 230 reform should include a requirement for social media platforms to make very clear what content is and is not allowed on their sites. McCollough echoed this concern, saying that many moderation actions platforms take today do not appear consistent with those platforms’ stated terms and conditions, and that individual states should be able to examine such instances on a case-by-case basis to determine whether platforms apply their terms and conditions fairly.

Feerst highlighted the nuance of the issue, noting that people’s definitions of “consistent” are naturally subjective, but agreed with McCollough that users who have content removed should be notified of the removal and of the reasoning behind the moderators’ action.

Lane also said that Section 230 reform should rightfully include a requirement that platforms demonstrate a reasonable standard of care and moderate illegal and other extremely dangerous content on their sites. Tews generally agreed with Lane that such content moderation is complex, as she sees a separation between freedom of speech and illegal activity.

Gerst highlighted concerns from companies the Internet Association represents that government regulation stemming from Section 230 reform would force widely varied platforms to standardize their operating approaches, diminishing innovation on the internet.


Wednesday, November 17, 2021, 12 Noon ET — The Changing Nature of the Debate About Social Media and Section 230

Facebook is under fire as never before. In response, the social-networking giant has gone so far as to change its official name to Meta (as in the “metaverse”). What are the broader concerns about social media beyond Facebook? How will concerns about Facebook’s practices spill over into other social media networks, and into the debate about Section 230 of the Communications Decency Act?

Panelists for this Broadband Breakfast Live Online session:

  • W. Scott McCollough, Attorney, McCollough Law Firm
  • Shane Tews, Nonresident Senior Fellow, American Enterprise Institute
  • Alex Feerst, Co-founder, Digital Trust & Safety Partnership
  • Rick Lane, CEO, Iggy Ventures
  • Matt Gerst, VP for Legal & Policy Affairs, Internet Association
  • Drew Clark (moderator), Editor and Publisher, Broadband Breakfast

Panelist resources:

W. Scott McCollough has practiced communications and Internet law for 38 years, with a specialization in regulatory issues confronting the industry.  Clients include competitive communications companies, Internet service and application providers, public interest organizations and consumers.

Shane Tews is a nonresident senior fellow at the American Enterprise Institute (AEI), where she works on international communications, technology and cybersecurity issues, including privacy, internet governance, data protection, 5G networks, the Internet of Things, machine learning, and artificial intelligence. She is also president of Logan Circle Strategies.

Alex Feerst is a lawyer and technologist focused on building systems that foster trust, community, and privacy. He leads Murmuration Labs, which helps tech companies address the risks and human impact of innovative products, and co-founded the Digital Trust & Safety Partnership, the first industry-led initiative to establish best practices for online trust and safety. He was previously Head of Legal and Head of Trust and Safety at Medium, General Counsel at Neuralink, and currently serves on the editorial board of the Journal of Online Trust & Safety, and as a fellow at Stanford University’s Center for Internet and Society.

Rick Lane is a tech policy expert, child safety advocate, and the founder and CEO of Iggy Ventures. Iggy advises and invests in companies and projects that can have a positive social impact. Prior to starting Iggy, Rick served for 15 years as the Senior Vice President of Government Affairs of 21st Century Fox.

Matt Gerst is the Vice President for Legal & Policy Affairs and Associate General Counsel at Internet Association, where he builds consensus on policy positions among IA’s diverse membership of companies that lead the internet industry. Most recently, Matt served as Vice President of Regulatory Affairs at CTIA, where he managed a diverse range of issues including consumer protection, public safety, network resiliency, and universal service. Matt received his J.D. from New York Law School, and he served as an adjunct professor of law in the scholarly writing program at the George Washington University School of Law.

Drew Clark is the Editor and Publisher of BroadbandBreakfast.com and a nationally-respected telecommunications attorney. Drew brings experts and practitioners together to advance the benefits provided by broadband. Under the American Recovery and Reinvestment Act of 2009, he served as head of a State Broadband Initiative, the Partnership for a Connected Illinois. He is also the President of the Rural Telecommunications Congress.




Democrats Use Whistleblower Testimony to Launch New Effort at Changing Section 230

The Justice Against Malicious Algorithms Act seeks to target large online platforms that push harmful content.


Rep. Anna Eshoo, D-California

WASHINGTON, October 14, 2021 – House Democrats are preparing to introduce legislation Friday that would remove legal immunities for companies that knowingly allow content that is physically or emotionally damaging to their users, following testimony last week from a Facebook whistleblower who claimed the company is able to push harmful content because of such legal protections.

The Justice Against Malicious Algorithms Act would amend Section 230 of the Communications Decency Act – which provides legal liability protections to companies for the content their users post on their platform – to remove that shield when the platform “knowingly or recklessly uses an algorithm or other technology to recommend content that materially contributes to physical or severe emotional injury,” according to a Thursday press release, which noted that the legislation will not apply to small online platforms with fewer than five million unique monthly visitors or users.

The legislation is relatively narrow in its target: algorithms that rely on a user’s personal history to recommend content. It won’t apply to search features or to algorithms that do not rely on that personalization, and it won’t apply to web hosting or data storage and transfer.

Reps. Anna Eshoo, D-California, Frank Pallone Jr., D-New Jersey, Mike Doyle, D-Pennsylvania, and Jan Schakowsky, D-Illinois, plan to introduce the legislation a little over a week after Facebook whistleblower Frances Haugen alleged that the company misrepresents how much offending content it removes.

Citing Haugen’s testimony before the Senate on October 5, Eshoo said in the release that “Facebook is knowingly amplifying harmful content and abusing the immunity of Section 230 well beyond congressional intent.

“The Justice Against Malicious Algorithms Act ensures courts can hold platforms accountable when they knowingly or recklessly recommend content that materially contributes to harm. This approach builds on my bill, the Protecting Americans from Dangerous Algorithms Act, and I’m proud to partner with my colleagues on this important legislation.”

The Protecting Americans from Dangerous Algorithms Act was introduced with Rep. Tom Malinowski, D-New Jersey, last October to hold companies responsible for “algorithmic amplification of harmful, radicalizing content that leads to offline violence.”

From Haugen testimony to legislation

Haugen claimed in her Senate testimony that according to internal research estimates, Facebook acts against just three to five percent of hate speech and 0.6 percent of violence incitement.

“The reality is that we’ve seen from repeated documents in my disclosures is that Facebook’s AI systems only catch a very tiny minority of offending content and, best-case scenario, in the case of something like hate speech, at most they will ever get 10 to 20 percent,” Haugen testified.

Haugen was catapulted into the national spotlight after she revealed herself on the television program 60 Minutes to be the person who leaked documents to the Wall Street Journal and the Securities and Exchange Commission that reportedly showed Facebook knew about the mental health harm its photo-sharing app Instagram has on teens but allegedly ignored that harm because addressing it conflicted with its profit motive.

Earlier this year, Facebook CEO Mark Zuckerberg said the company was developing a version of Instagram for kids under 13. But following the Journal story and calls by lawmakers to back down from pursuing the app, Facebook suspended the app’s development and said it was making changes to its apps to “nudge” users away from content that may be harmful to them.

Haugen’s testimony versus Zuckerberg’s Section 230 vision

In his testimony before the House Energy and Commerce committee in March, Zuckerberg claimed that the company’s hate speech removal policy “has long been the broadest and most aggressive in the industry.”

This claim has been the basis for the CEO’s suggestion that Section 230 be amended to penalize companies that fail to build systems for removing violent and hateful content that are proportional in size and effectiveness to the platform’s size. In other words, larger sites would face more regulation and smaller sites would face less.

Or in Zuckerberg’s words to Congress, “platforms’ intermediary liability protection for certain types of unlawful content [should be made] conditional on companies’ ability to meet best practices to combat the spread of harmful content.”

Facebook has previously pushed for FOSTA-SESTA, a controversial 2018 law that created an exception to Section 230 for advertisements related to prostitution. Lawmakers have proposed other modifications to the liability provision, including removing protections for content that a platform is paid for and for platforms that allow the spread of vaccine misinformation.

Zuckerberg said companies shouldn’t be held responsible for individual pieces of content that could or would evade the systems in place, so long as the company has demonstrated that it maintains “adequate systems to address unlawful content.” That, he said, is predicated on transparency.

But according to Haugen, “Facebook’s closed design means it has no oversight — even from its own Oversight Board, which is as blind as the public. Only Facebook knows how it personalizes your feed for you. It hides behind walls that keep the eyes of researchers and regulators from understanding the true dynamics of the system.” She also alleges that Facebook’s leadership hides “vital information” from the public and global governments.

An Electronic Frontier Foundation study found that Facebook lags behind competitors on issues of transparency.

Where the parties agree

Zuckerberg and Haugen do agree that Section 230 should be amended. Haugen would amend Section 230 “to make Facebook responsible for the consequences of their intentional ranking decisions,” meaning that practices such as engagement-based ranking would be evaluated for the incendiary or violent content they promote above more mundane content. If Facebook is choosing to promote content which damages mental health or incites violence, Haugen’s vision of Section 230 would hold them accountable. This change would not hold Facebook responsible for user-generated content, only the promotion of harmful content.

Both have also called for a third-party body to be created by the legislature which provides oversight on platforms like Facebook.

Haugen asks that this body be able to conduct independent audits of Facebook’s data, algorithms, and research, and that the information be made available to the public, scholars, and researchers to interpret, with adequate privacy protections and anonymization in place. Besides taking into account the size and scope of the platforms it regulates, Zuckerberg asks that the practices of the body be “fair and clear” and that unrelated issues “like encryption or privacy changes” be dealt with separately.

With reporting from Riley Steward

