Social Media

Twitter Takeover by Elon Musk Forces Conflict Over Free Speech on Social Networks

Transparency laws in Calif. and N.Y. are the ‘liberal’ counterpart to the ‘conservative’ speech laws in Texas and Florida.

WASHINGTON, November 23, 2022 — As the Supreme Court prepares to hear two cases that may decide the future of content moderation, panelists on a Broadband Breakfast Live Online panel disagreed over the steps that platforms can and should take to ensure fairness and protect free speech.

Mike Masnick, founder and editor of Techdirt, argued that both sides of the aisle were attempting to control speech in one way or another, pointing to laws in California and New York as the liberal counterparts to the laws in Texas and Florida that are headed to the Supreme Court.

“They’re not as blatant, but they are nudging companies to moderate in a certain way,” he said. “And I think those are equally unconstitutional.”

Censorship posed a greater threat to the ideal of free speech than would a law forcing platforms to carry certain content, said Bret Swanson, a nonresident senior fellow at the American Enterprise Institute.

“Free speech and pluralism, as an ethos for the country and really for the West, are in fact more important than the First Amendment,” he said.

At the same time, content moderation legislation is stalled by a sharp partisan divide, said Mark MacCarthy, a nonresident senior fellow in governance studies at the Brookings Institution’s Center for Technology Innovation.

“Liberals and progressives want action to remove lies and hate speech and misinformation from social media and the conservatives want equal time for conservative voices, so there’s a logjam gridlock that can’t move,” he said. “I think it might be broken if, as I predict, the Supreme Court says that the only way you can regulate social media companies is through transparency.”

Twitter’s past and current practices raise questions about bias and free speech

Beyond Elon Musk’s controversial changes to Twitter’s content moderation practices, panelists also discussed the broader impact of Musk’s rhetoric on the topic.

“Declaring yourself as a free speech site without understanding what free speech actually means is something that doesn’t last very long,” Masnick said.

When a social media company like Twitter or Parler declares itself to be a “free speech site,” it is really just sending a signal to some of the worst people and trolls online to begin harassment, abuse and bigotry, he said.

That is not a sustainable business model, Masnick argued.

But Swanson took the opposite view. He called Musk’s acquisition of Twitter “a real seminal moment in the history and the future of free speech” and an antidote to “the most severe collapse of free speech maybe in American history.”

MacCarthy said he didn’t believe the oft-repeated assertion that Twitter was biased against conservatives before Musk took over. “The only study I’ve seen of political pluralism on Twitter — and it was done by Twitter itself back when they had the staff to do that kind of thing — suggested that Twitter’s amplification and recommendation engines actually favored conservative tweets over liberal ones.”

Masnick agreed, pointing to other academic studies: “They seemed to bend over backwards to often allow conservatives to break the rules more than others,” he said.

Randolph May, president of The Free State Foundation, said that he was familiar with the studies but disagreed with their findings.

Citing the revelations from the laptop of Hunter Biden, a story that the New York Post broke in October 2020 about Joe Biden’s son, May said: “To me, that was a consequential censorship action. Then six months later before a congressional committee, [Twitter CEO] Jack Dorsey said, ‘Oops, we made a big mistake when we took down the New York Post stories.’”

Multiple possibilities for the future of content moderation

Despite his criticism of current practices, May said he did not believe platforms should eliminate content moderation practices altogether. He drew a distinction between topics subject to legitimate public debate and those posts that encourage terrorism or facilitate sex trafficking. Those kinds of posts should be subject to moderation practices, he said.

May made three suggestions for better content moderation practices: First, platforms should establish a presumption that they will not censor or downgrade material without clear evidence that their terms of service have been violated.

Second, platforms should work to enable tools that facilitate personalization of the user experience.

Finally, the current state of Section 230 immunity should be replaced with a “reasonableness standard,” he said.

Other panelists disagreed with the subjectivity of such a reasonableness standard. MacCarthy highlighted the Texas social media law, which bans discrimination based on viewpoint. “Viewpoint is undefined: What does that mean?” he asked.

“Does it mean you can’t get rid of Nazi speech, you can’t get rid of hate speech, you can’t get rid of racist speech? What does it mean? No one knows. And so here’s a requirement of government that no one can interpret. If I were the Supreme Court, I’d declare that void for vagueness in a moment.”

MacCarthy predicted that the Supreme Court would reject the content-based provisions in the Texas and Florida laws while upholding the transparency standard, opening the door, he argued, for bipartisan transparency legislation.

But to Masnick, even a transparency requirement alone would be an unsatisfactory result: “How would conservatives feel if the government said, ‘Fox News needs to be transparent about how they make their editorial decisions?’”

“I think everyone would recognize immediately that that is a huge First Amendment concern,” he said.

Our Broadband Breakfast Live Online events take place on Wednesdays at 12 Noon ET. Watch the event on Broadband Breakfast, or REGISTER HERE to join the conversation.

Wednesday, November 23, 2022, 12 Noon ET – Elon and Ye and Donald, Oh My!

With Elon Musk finally taking the reins at Twitter after a tumultuous acquisition process, what new changes will come to the world’s de facto public square? The world’s richest man has already reinstated certain banned accounts, including that of former president Donald Trump. Trump has made his own foray into the world of conservative social media, as has politically polarizing rapper Ye, formerly Kanye West, currently in the process of purchasing right-wing alternative platform Parler. Ye is no stranger to testing the limits of controversial speech. With Twitter in the hands of Musk, Parler in the process of selling and Trump’s Truth Social sort-of-kind-of forging ahead in spite of false starts, is a new era of conservative social media upon us?

Panelists

  • Mark MacCarthy, Nonresident Senior Fellow in Governance Studies, Center for Technology Innovation, Brookings Institution
  • Mike Masnick, Founder and Editor, Techdirt
  • Randolph May, President, The Free State Foundation
  • Bret Swanson, Nonresident Senior Fellow, American Enterprise Institute
  • Drew Clark (moderator), Editor and Publisher, Broadband Breakfast

Panelist resources:

Mark MacCarthy is a Nonresident Senior Fellow in Governance Studies at the Center for Technology Innovation at Brookings. He is also an adjunct professor at Georgetown University in the Graduate School’s Communication, Culture, & Technology Program and in the Philosophy Department. He teaches courses in the governance of emerging technology, AI ethics, privacy, competition policy for tech, content moderation for social media, and the ethics of speech. He is also a Nonresident Senior Fellow in the Institute for Technology Law and Policy at Georgetown Law.

Mike Masnick is the founder and editor of the popular Techdirt blog as well as the founder of the Silicon Valley think tank, the Copia Institute. In both roles, he explores the intersection of technology, innovation, policy, law, civil liberties, and economics. His writings have been cited by Congress and the EU Parliament. According to a Harvard Berkman Center study, his coverage of the SOPA copyright bill made Techdirt the most linked-to media source throughout the course of that debate.

Randolph May is founder and president of The Free State Foundation, an independent, non-profit, free market-oriented think tank founded in 2006. He has practiced communications, administrative, and regulatory law as a partner at major national law firms. From 1978 to 1981, May served as Assistant General Counsel and Associate General Counsel at the Federal Communications Commission. He is a past Chair of the American Bar Association’s Section of Administrative Law and Regulatory Practice.

Bret Swanson is president of the technology research firm Entropy Economics LLC, a nonresident senior fellow at the American Enterprise Institute, a visiting fellow at the Krach Institute for Tech Diplomacy at Purdue University and chairman of the Indiana Public Retirement System (INPRS). He writes the Infonomena newsletter at infonomena.substack.com.

Drew Clark (moderator) is CEO of Breakfast Media LLC, the Editor and Publisher of BroadbandBreakfast.com and a nationally respected telecommunications attorney. Under the American Recovery and Reinvestment Act of 2009, he served as head of the State Broadband Initiative in Illinois. Now, in light of the 2021 Infrastructure Investment and Jobs Act, attorney Clark helps fiber-based and wireless clients secure funding, identify markets, broker infrastructure and operate in the public right of way.

Social media controversy has centered on Elon Musk’s Twitter, Ye’s new role at Parler, and former U.S. President Donald Trump

WATCH HERE, or on YouTube, Twitter and Facebook.

As with all Broadband Breakfast Live Online events, the FREE webcasts will take place at 12 Noon ET on Wednesday.

SUBSCRIBE to the Broadband Breakfast YouTube channel. That way, you will be notified when events go live. Watch on YouTube, Twitter and Facebook.

See a complete list of upcoming and past Broadband Breakfast Live Online events.

Reporter Em McPhie studied communication design and writing at Washington University in St. Louis, where she was a managing editor for the student newspaper. In addition to agency and freelance marketing experience, she has reported extensively on Section 230, big tech, and rural broadband access. She is a founding board member of Code Open Sesame, an organization that teaches computer programming skills to underprivileged children.

Free Speech

Additional Content Moderation for Section 230 Protection Risks Reducing Speech on Platforms: Judge

People will migrate away from platforms with overly stringent content moderation measures.

Photo of Douglas Ginsburg by Barbara Potter/Free to Choose Media

WASHINGTON, March 13, 2023 – Requiring companies to moderate more content as a condition of Section 230 legal liability protections runs the risk of alienating users from platforms and discouraging communications, argued a judge of the U.S. Court of Appeals for the District of Columbia Circuit last week.

“The criteria for deletion are vague and difficult to parse,” Douglas Ginsburg, a Ronald Reagan appointee, said at a Federalist Society event on Wednesday. “Some of the terms are inherently difficult to define and policing what qualifies as hate speech is often a subjective determination.”

“If content moderation became very rigorous, it is obvious that users would depart from platforms that wouldn’t run their stuff,” Ginsburg added. “And they will try to find more platforms out there that will give them a voice. So, we’ll have more fragmentation and even less communication.”

Ginsburg noted that the large technology platforms already moderate a massive amount of content, adding that further moderation would be fairly challenging.

“Twitter, YouTube and Facebook remove millions of posts and videos based on those criteria alone,” Ginsburg noted. “YouTube gets 500 hours of video uploaded every minute — 30,000 minutes of video coming online every minute. So the task of moderating this is obviously very challenging.”

Section 230

Section 230 Shuts Down Conversation on First Amendment, Panel Hears

The law prevents discussion of how the First Amendment should be applied in a new age of technology, says expert.

Photo of Ron Yokubaitis of Texas.net, Ashley Johnson of Information Technology and Innovation Foundation, Emma Llanso of Center for Democracy and Technology, Matthew Bergman of Social Media Victims Law Center, and Chris Marchese of Netchoice (left to right)

WASHINGTON, March 9, 2023 – Section 230 as it is written shuts down the conversation about the First Amendment, claimed experts in a debate at Broadband Breakfast’s Big Tech & Speech Summit Thursday.

Matthew Bergman, founder of the Social Media Victims Law Center, suggested that Section 230 forecloses discussion of how to weigh the costs and benefits of granting big tech companies immunity from litigation over the moderation decisions they make on their platforms.

We need to talk about what level of First Amendment protection is appropriate in a new world of technology, said Bergman. That discussion happens primarily through an open litigation process, he said, which is not now available to those harmed by these products.

All companies must exercise reasonable care, Bergman argued. Opening up litigation doesn’t mean that all claims are necessarily viable, only that the process should play out in the courts, he said.

Eliminating Section 230 could lead online services to overcorrect in moderating speech, which could suffocate social reform movements organized on those platforms, argued Ashley Johnson of the Information Technology and Innovation Foundation, a research institution.

Furthermore, the burden of litigation would fall disproportionately on the companies with fewer resources to defend themselves, she continued.

Bergman responded: “If a social media platform is facing a lot of lawsuits because there are a lot of kids who have been hurt through the negligent design of that platform, why is that a bad thing?” People who are injured have the right by law to seek redress against the entity that caused that injury, Bergman said.

Emma Llanso of the Center for Democracy and Technology suggested that if Section 230 were reformed or abolished, platforms would fundamentally change the way they operate to avoid the threat of litigation, which could threaten freedom of speech for their users.

Protecting the First Amendment requires an internet made up of many platforms with different content moderation policies, ensuring that all people have a voice, she said.

To this, Bergman argued that there is a distinction between algorithms that suggest content users do not want to see – even content whose existence is unknown to them – and ensuring that speech is not censored.

The question is one of balancing the faulty design of a product against the protection of speech, and the courts are where that balancing act should take place, said Bergman.

This comes days after legal professionals urged Congress to amend the statute to specify that it applies only to free speech, rather than to the negligent design of product features that promote harmful speech. The discussion followed the Supreme Court’s consideration of whether to provide immunity to Google for recommending terrorist videos on its video platform YouTube.

Free Speech

Creating Institutions for Resolving Content Moderation Disputes Out-of-Court

Private institutions may become the primary venue for resolving content moderation disputes, says expert.

Photo of John Samples, member of Meta's Oversight Board

WASHINGTON, March 9, 2023 – A member of Meta’s oversight board, John Samples, suggested at Broadband Breakfast’s Big Tech & Speech Summit Thursday that out-of-court dispute institutions may become the preferred method of settling content moderation disputes.

Meta’s oversight board was created by the company to support free speech by upholding or reversing Facebook’s content moderation decisions. It works independently of the company and has 40 members from around the world.

The European Union’s Digital Services Act, which came into force in November of 2022, requires platforms to remove illegal content and ensure that users can contest removal of their content. It clarifies that platforms are only liable for users’ unlawful behavior if they are aware of it and fail to remove it. 

The Act specifies that illegal speech includes speech that harms the electoral system, hate speech, and speech that harms fundamental rights. The appeals process allows citizens to go directly to the company, to the national courts, or to out-of-court dispute resolution institutions – the last of which do not yet exist in Europe.

According to Samples, the Act opens the way for private organizations like the oversight board to play a part in moderation disputes. “Meta has a tremendous advantage here as a first mover,” said Samples, “and the model of the oversight board may well spread to Europe and perhaps other places.” 

The United States may adopt European processes in the future as Europe takes the lead in moderating big tech, claimed Samples. “It would largely be a private system,” he said, and could unify and centralize social media moderation across platforms and around the world.

The private option of self-regulation has worked well, said Samples. “It may well be expanding throughout much of the world. If it goes to Europe, it could go throughout.” 

Of the media that Meta currently reviews for moderation, only one percent is restricted, either by taking down the content or by reducing the size of the audience exposed to it, said Samples. The oversight board primarily rules against Meta’s decisions and accepts comments from independent interests.
