Big Tech

The Rise, Reign, and Self-Repair of Zoom

The deceptively named “Ask Eric Anything” weekly webinar is more of a corporate team effort that resembles The Brady Bunch than a lone Thermopylae-like press conference

May 8, 2020 – Eric Yuan came up with the idea for Zoom as a student while taking 10-hour train rides to visit his girlfriend in China. In 2011 he left Cisco Webex to found Zoom in San Jose, California, with the mission “to make video communications frictionless.” Zoom earned a billion-dollar valuation by 2017 and went public in 2019 in one of the most successful IPOs of that year.

And then the coronavirus appeared in Zoom’s waiting room, and it was not to be ejected from the chat.

As Americans have entered a world riddled with tele-prefixes, Zoom, whether it has wanted to or not, has entered the pantheon of Tide and Alexa to become a household name. By April 1, the number of Zoom’s daily participants had skyrocketed from 10 million in 2019 to 200 million.

Indeed, Zoom became the overnight king of an industry thrust into new prominence by the pandemic: videoconferencing.

As hundreds of millions of Americans and billions of global citizens adjust to new norms for work, medicine, and education, Zoom has emerged as the go-to application, cutting commute times to zero.

What is Zoom and what propelled it to widespread name recognition? It’s not Webex

The most likely answer to what propelled Zoom to prominence comes from its mission statement: “to make video communications frictionless.”

Rachna Sizemore Heizer, a member at large of the Fairfax County Public Schools Board, highlighted simplicity as an advantage in her initial decision to use Zoom for her school board meetings. “It’s easier to understand if you’re new to the stuff,” Heizer said.

Cynthia Jelke, 18, a sophomore at Tufts University, found Zoom essential to her success. “I genuinely wouldn’t be able to do my education without it,” Jelke said.

Even the Federal Communications Commission, the agency tasked with improving communications, drew criticism on Tuesday for using Cisco Webex video conferencing technology to launch its Rural Digital Opportunity Fund auction webinar.

The web seminar, designed to teach applicants how to apply for more than $20 billion worth of funds, ended up turning away business and media leaders due to clunky audio-capacity limitations.

Commentators in the chat box complained in real time about the frustrations they faced. User “Natee” chirped at 4:10 p.m. during the webinar: “Webex is no good. That is why the original Webex developer created Zoom.”

Workplaces and schools have taken to Zoom

The workforce has also taken quickly to the interface. Patrick McGrath, a software engineer from Chicago, praised Zoom for its Whiteboarding feature, which allows users to sketch concepts in a creative and expressive way. “It allows for collaboration,” McGrath said in an interview with Broadband Breakfast.

Then there are the memes. Perhaps because Zoom resonated with teenagers, many of whom have had to use Zoom for school, it has become an endless generator of viral content and a hub for a shared experience.

Students from different colleges started saying that they all attend “Zoom University.” Zoom University T-shirt vendors began popping up online.

Zoom has been an endless source of inspiration for meme artists

 

Zoom also offers the option to easily customize one’s background without a green screen, adding a touch of personalization that is reminiscent of social media.

The videoconferencing service has a “hotter brand” than other teleconferencing companies, Rishi Jaluria, a senior research analyst at D.A. Davidson, told The New York Times. “Younger people don’t want to use the older technology.”

Joshua Rush, 18, a high school senior in Los Angeles, told the Times: “Out of nowhere, I feel like Zoom has clout.”

The memes “help lighten the mood of being kicked out of your school,” Tufts sophomore Jelke told Broadband Breakfast.

If there was any doubt that Zoom had chiseled a frieze in the pantheon of pop culture, Saturday Night Live’s first virtual episode put that skepticism to rest.

“Live from Zoom, it’s Saturday Night Live,” announced the cast of SNL, who used Zoom for large swaths of its episode on April 11.

Tom Hanks, the host of the episode and a popular coronavirus survivor, had fun with the monologue, using video cuts and costumes to play different characters. The episode featured many playful jabs at the ubiquitous platform, and one sketch dedicated to Zoom profiled common videoconferencing personalities.

OK, so why is Zoom suffering?

Zoom Sales Leader Colby Nish, with the company logo in the background: “Delivering happiness”

Almost as quickly as “Zoom” has become a verb, “Zoombombing” has entered the national lexicon. Zoombombing occurs when a Zoom meeting host or attendee leaves the join URL exposed, which, in the world of the internet, can happen in many different ways.

A prankster can then use this neglected link to crash a meeting and broadcast improper material, such as pornography or racist content. The FBI issued a warning about Zoombombing on March 30 — but that hasn’t curbed the rise of this new breed of troll.

The Anti-Defamation League had already documented 21 instances of anti-Semitic Zoombombing as of April 6, targeting government meetings, schools and houses of worship.

Journalists Kara Swisher of The New York Times and Jessica Lessin of The Information were forced to shut down their Zoom webinar on feminism in tech on March 15, when trolls broke into their meeting and began broadcasting a shock video. A meeting of the Indiana Election Commission was interrupted by a video of a man masturbating.

The graphic examples don’t stop there.

Asked about the issue of security, McGrath, the software engineer from Chicago, responded: “we have a definite team to take care of that… It’s totally because of the security concerns that have been going around.”

From Zoombombing to… other privacy and security concerns

Then there’s the issue of privacy.

As early as March 26, Vice reported that Zoom had been sharing its users’ data with Facebook without their knowledge.

The shared data included when the user opened the app, details about the user’s device such as the model, the time zone and city from which the user was connecting, the user’s phone carrier, and information that allows third-party companies to target the user with advertisements.

“That’s shocking. There is nothing in the privacy policy that addresses that,” Pat Walshe, an activist from Privacy Matters who has analyzed Zoom’s privacy policy, said in a Twitter direct message with Vice.

And then there’s the issue of the Chinese server.

The University of Toronto’s Citizen Lab published a report showing that some Zoom user data is accessible by the company’s server in China, “even when all meeting participants, and the Zoom subscriber’s company, are outside of China,” the authors of the report wrote.

The Toronto lab also noted that Zoom’s arrangement of owning three China-based companies and employing 700 mainland Chinese software developers “may make Zoom responsive to pressure from Chinese authorities.” These vulnerabilities could give the Chinese government a way to tap into Zoom calls, said Bill Marczak, a research fellow at Citizen Lab.

Zoom’s claim to offer end-to-end encryption was scrutinized by The Intercept and found to be false. The company was forced to backtrack and apologize in a blog post by Oded Gal, Zoom’s chief product officer:

“In light of recent interest in our encryption practices, we want to start by apologizing for the confusion we have caused by incorrectly suggesting that Zoom meetings were capable of using end-to-end encryption…. While we never intended to deceive any of our customers, we recognize that there is a discrepancy between the commonly accepted definition of end-to-end encryption and how we were using it. This blog is intended to rectify that discrepancy and clarify exactly how we encrypt the content that moves across our network.”
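The discrepancy The Intercept identified is the difference between transport encryption, in which the provider’s servers can decrypt the traffic passing through them, and true end-to-end encryption, in which only the meeting participants hold the keys. The Python sketch below, using the cryptography package, is a conceptual illustration of that distinction only; it is not Zoom’s protocol, and the key-exchange handshake a real end-to-end system would need is omitted.

```python
# Conceptual sketch of transport vs. end-to-end encryption.
# Not Zoom's actual protocol. Requires: pip install cryptography
from cryptography.fernet import Fernet

# --- Transport encryption ---
# The provider generates and holds the key, so its servers can decrypt,
# process, and re-encrypt media in transit (the service can see content).
provider_key = Fernet.generate_key()
provider = Fernet(provider_key)
ciphertext = provider.encrypt(b"meeting audio/video frame")
print(provider.decrypt(ciphertext))  # the server can read this

# --- End-to-end encryption (the commonly accepted definition) ---
# Only the participants hold the key; the server merely relays ciphertext
# it cannot read. (A real system would negotiate this key between clients,
# e.g. with a Diffie-Hellman handshake, omitted here.)
participants_key = Fernet.generate_key()  # shared only among participants
alice, bob = Fernet(participants_key), Fernet(participants_key)
relayed = alice.encrypt(b"meeting audio/video frame")
print(bob.decrypt(relayed))  # only the participants can read this
```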

The Attorney General of New York sent the company a letter asking about the privacy shortcomings that allow Zoombombing and questioning its murky data-sharing agreement with Facebook. That was just one of 26 letters Zoom has received from state attorneys general.

The problems keep coming. Some entities are dropping Zoom. Elon Musk banned SpaceX from using Zoom. Taiwan has banned it. Germany has restricted its usage.

A Zoom shareholder is suing the company for overstating its encryption capabilities. Even local school districts, such as Fairfax County Public Schools, have deemed the technology unsafe and are experimenting with alternatives.

The Era of Self-Repair

Days after Vice’s report, Zoom removed the code that had shared user data with Facebook.

Zoom began allowing users to deactivate the Chinese server. By April 25, any user who had not expressly opted to keep their data on the Chinese server was to be automatically removed from that data route.

Such an “opt-in” approach to data sharing is rare in the world of privacy.

And Zoom has been highly communicative about its blunders. Yuan has repeatedly posted blogs on the company’s website updating users about security and about new, common-sense features, such as making security settings more prominent and allowing participants to report abusive users.

He has also used his blogs to draw attention to the tools that have always existed for dealing with trolls, such as good cyber hygiene and tutorials for using the Zoom Waiting Room to vet join requests.

You can ask Zoom anything, as long as it’s on Zoom

Most notably, Zoom has been hosting a series of weekly webinars with Yuan himself since April 8, called “Ask Eric Anything.” He has made himself as available as a CEO can be.

At one of the first of these webcasts, the majority of questions revolved around interface and troubleshooting, but some addressed security concerns.

For “the next 90 days,” Zoom will be “incredibly focused on enhancing our privacy and security,” promised Yuan.

See “Zoom CEO Eric Yuan Pledges to Address Security Shortcomings in ‘The Next 90 Days’,” Broadband Breakfast, April 20, 2020.

In fact, Zoom has branded itself around “The Next 90 Days,” a period in which it has committed to focusing solely on privacy and security challenges.

Asked about the specifics of its efforts by Broadband Breakfast, a Zoom spokesperson said, “Together, I have no doubt we will make Zoom synonymous with safety and security.”

Zoom has also made a slew of conspicuous hires: Katie Moussouris, a cybersecurity expert who created bug bounty programs for Microsoft and the Pentagon; Lea Kissner, Google’s former head of privacy; and Alex Stamos, director of the Stanford Internet Observatory and Facebook’s former chief security officer.

During Stamos’ time at Facebook, he advocated greater disclosure around Russian interference on Facebook during the 2016 election. His insistence that Facebook do more created internal disagreements that eventually led to his departure.

“To successfully scale a video-heavy platform to such a size, with no appreciable downtime and in the space of weeks,” Stamos said in a blog post explaining his decision to temporarily leave Stanford and join Zoom, “is literally unprecedented in the history of the internet.”

He described the challenge as “too interesting to pass up.”

In the end, the problem Zoom has faced isn’t specific to Zoom; it is a human problem. The real challenge, as Stamos said, “is how to empower one’s customers without empowering those who wish to abuse them.”

Artificial Intelligence

Automated Content Moderation’s Main Problem is Subjectivity, Not Accuracy, Expert Says

With millions of pieces of content generated daily, platforms are increasingly relying on AI for moderation.

Screenshot of American Enterprise Institute event

WASHINGTON, February 2, 2023 — The vast quantity of online content generated daily will likely drive platforms to increasingly rely on artificial intelligence for content moderation, making it critically important to understand the technology’s limitations, according to an industry expert.

Despite the ongoing culture war over content moderation, the practice is largely driven by financial incentives — so even companies with “a speech-maximizing set of values” will likely find some amount of moderation unavoidable, said Alex Feerst, CEO of Murmuration Labs, at a Jan. 25 American Enterprise Institute event. Murmuration Labs works with tech companies to develop online trust and safety products, policies and operations.

If a piece of online content could potentially lead to hundreds of thousands of dollars in legal fees, a company is “highly incentivized to err on the side of taking things down,” Feerst said. And even beyond legal liability, if the presence of certain content will alienate a substantial number of users and advertisers, companies have financial motivation to remove it.

However, a major challenge for content moderation is the sheer quantity of user-generated online content — which, on the average day, includes 500 million new tweets, 700 million Facebook comments and 720,000 hours of video uploaded to YouTube.

“The fully loaded cost of running a platform includes making millions of speech adjudications per day,” Feerst said.

“If you think about the enormity of that cost, very quickly you get to the point of, ‘Even if we’re doing very skillful outsourcing with great accuracy, we’re going to need automation to make the number of daily adjudications that we seem to need in order to process all of the speech that everybody is putting online and all of the disputes that are arising.’”
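Feerst’s point about the scale of adjudication can be made concrete with a rough back-of-the-envelope calculation using only the volume figures cited above. The one-percent flag rate and 30-second review time in the sketch below are illustrative assumptions, not figures from the event.

```python
# Back-of-the-envelope estimate of daily moderation workload.
# Volumes are the figures cited above; the flag rate and review time
# are illustrative assumptions.
daily_tweets = 500_000_000
daily_facebook_comments = 700_000_000
daily_youtube_hours = 720_000  # video hours, not counted in the text estimate below

text_items = daily_tweets + daily_facebook_comments  # 1.2 billion items per day
assumed_flag_rate = 0.01        # assume 1% of items need a moderation decision
seconds_per_review = 30         # assume 30 seconds per human review

adjudications = text_items * assumed_flag_rate
reviewer_hours = adjudications * seconds_per_review / 3600

print(f"{adjudications:,.0f} adjudications per day")    # 12,000,000
print(f"{reviewer_hours:,.0f} reviewer-hours per day")  # 100,000
```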

Automated moderation is not just a theoretical future question. In a March 2021 congressional hearing, Meta CEO Mark Zuckerberg testified that “more than 95 percent of the hate speech that we take down is done by an AI and not by a person… And I think it’s 98 or 99 percent of the terrorist content.”

Dealing with subjective content

But although AI can help manage the volume of user-generated content, it can’t solve one of the key problems of moderation: Beyond a limited amount of clearly illegal material, most decisions are subjective.

Much of the debate surrounding automated content moderation mistakenly presents subjectivity problems as accuracy problems, Feerst said.

For example, much of what is generally considered “hate speech” is not technically illegal, but many platforms’ terms of service prohibit such content. With these extrajudicial rules, there is often room for broad disagreement over whether any particular piece of content is a violation.

“AI cannot solve that human subjective disagreement problem,” Feerst said. “All it can do is more efficiently multiply this problem.”

This multiplication becomes problematic when AI models are replicating and amplifying human biases, which was the basis for the Federal Trade Commission’s June 2022 report warning Congress to avoid overreliance on AI.

“Nobody should treat AI as the solution to the spread of harmful online content,” said Samuel Levine, director of the FTC’s Bureau of Consumer Protection, in a statement announcing the report. “Combatting online harm requires a broad societal effort, not an overly optimistic belief that new technology — which can be both helpful and dangerous — will take these problems off our hands.”

The FTC’s report pointed to multiple studies revealing bias in automated hate speech detection models, often as a result of being trained on unrepresentative and discriminatory data sets.

As moderation processes become increasingly automated, Feerst predicted that the “trend of those problems being amplified and becoming less possible to discern seems very likely.”

Given those dangers, Feerst emphasized the urgency of understanding and then working to resolve AI’s limitations, noting that the demand for content moderation will not go away. To some extent, speech disputes are “just humans being human… you’re never going to get it down to zero,” he said.

Social Media

Must Internet Platforms Host Objectionable Content? Appeals Courts Consider ‘Must Carry’ Rules

Court decisions on Texas and Florida “must-carry” laws disagreed on whether online platforms should be regulated as common carriers.

Photo of Reese Schonfeld, president of Cable News Network, and Reynelda Nuse, weekend anchorwoman for CNN, standing at a set at the broadcast center in Atlanta in May 1980.

WASHINGTON, January 30, 2023 — As the Supreme Court prepares to hear a pair of cases about online platform liability, it is also considering a separate pair of social media lawsuits that aim to push content moderation practices in the opposite direction, adding additional questions about the First Amendment and common carrier status to an already complicated issue.

The “must-carry” laws in Texas and Florida, both aimed at limiting online content moderation, met with mixed decisions in appeals courts after being challenged by tech industry groups NetChoice and the Computer & Communications Industry Association. The outcomes will likely end up “affecting millions of Americans and their ability to express themselves online,” said Chris Marchese, counsel at NetChoice, at a Broadband Breakfast Live Online event on Wednesday.

In September, the Fifth Circuit Court of Appeals upheld the Texas law, ruling that social media platforms can be regulated as “common carriers” and required to carry content, much as cable television operators were required to carry broadcast programming under the Turner Broadcasting System v. FCC decisions of the 1990s.

Dueling appeals court interpretations

By contrast, the judges overturning the Florida ruling held that social media platforms are not common carriers. Even if they were, the 11th Circuit Court judges held, “neither law nor logic recognizes government authority to strip an entity of its First Amendment rights merely by labeling it a common carrier.”

Whether social media platforms should be treated like common carriers is “a fair question to ask,” said Marshall Van Alstyne, Questrom chair professor at Boston University. It would be difficult to reach a broad audience online without utilizing one of the major platforms, he claimed.

However, Marchese argued that in the Texas ruling, the Fifth Circuit “to put it politely, ignored decades of binding precedent.” First Amendment protections have previously been extended to “what we today might think of as common carriers,” he said.

“I think we can safely say that Texas and Florida do not have the ability to force our private businesses to carry political speech or any type of speech that they don’t see fit,” Marchese said.

Ari Cohn, free speech counsel at TechFreedom, disagreed with the common carrier classification altogether, referencing an amicus brief arguing that “social media and common carriage are irreconcilable concepts,” filed by TechFreedom in the Texas case.

Similar ‘must-carry’ laws are gaining traction in other states

While the two state laws have the same general purpose of limiting moderation, their specific restrictions differ. The Texas law would ban large platforms from any content moderation based on “viewpoint.” Critics have argued that the term is so vague that it could prevent moderation entirely.

“In other words, if a social media service allows coverage of Russia’s invasion of Ukraine, it would also be forced to disseminate Russian propaganda about the war,” Marchese said. “So if you allow conversation on a topic, then you must allow all viewpoints on that topic, no matter how horrendous those viewpoints are.”

The Florida law “would require covered entities — including ones that you wouldn’t necessarily think of, like Etsy — to host all or nearly all content from so-called ‘journalistic enterprises,’ which is basically defined as anybody who has a small following on the internet,” Marchese explained. The law also prohibits taking down any speech from political candidates.

The impact of the two cases will likely be felt far beyond those two states, as dozens of similar content moderation bills have already been proposed in states across the country, according to Ali Sternburg, vice president of information policy for the CCIA.

But for now, both laws are blocked while the Supreme Court decides whether to hear the cases. On Jan. 23, the court asked for the U.S. solicitor general’s input on the decision.

“I think this was their chance to buy time because in effect, so many of these cases are actually asking the court to do opposite things,” Van Alstyne said.

Separate set of cases calls for more, not less, moderation

In February, the Supreme Court will hear two cases that effectively argue the reverse of the Texas and Florida laws by alleging that social media platforms are not doing enough to remove harmful content.

The cases were brought against Twitter and Google by family members of terror attack victims, who argue that the platforms knowingly allowed terrorist groups to spread harmful content and coordinate attacks. One case specifically looks at YouTube’s recommendation algorithms, asking whether Google can be held liable for not only hosting but promoting terrorist content.

Algorithms have become “the new boogeyman” in ongoing technology debates, but they essentially act like mirrors, determining content recommendations based on what users have searched for, engaged with and said about themselves, Cohn explained.
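Cohn’s “mirror” framing can be illustrated with a toy ranking function that simply scores candidate items by their overlap with topics the user has already engaged with. The sketch below is a deliberate oversimplification for illustration and does not represent any platform’s actual recommendation system.

```python
# Toy illustration of an engagement "mirror": rank candidates by how much
# they overlap with topics the user has already engaged with.
# Deliberately simplified; not any real platform's algorithm.
from collections import Counter

def recommend(engaged_topics, candidates, top_n=3):
    """Score each candidate by the user's past engagement with its topics."""
    weights = Counter(engaged_topics)
    scored = [(sum(weights[t] for t in item["topics"]), item["title"])
              for item in candidates]
    return [title for _, title in sorted(scored, reverse=True)[:top_n]]

history = ["cooking", "cooking", "gardening", "politics"]
candidates = [
    {"title": "Sourdough basics", "topics": ["cooking"]},
    {"title": "Campaign roundup", "topics": ["politics"]},
    {"title": "Astronomy news", "topics": ["science"]},
]
print(recommend(history, candidates))
# Output mirrors the user's own history: cooking first, then politics.
```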

Reese Schonfeld, president of Cable News Network, and Reynelda Nuse, weekend anchorwoman for CNN, stand at one of the many sets at the broadcast center in Atlanta on May 31, 1980. The network, owned by Ted Turner, began its 24-hour-a-day news broadcasts that Sunday afternoon. (AP Photo/Joe Holloway, used with permission.)

“This has been litigated in a number of different contexts, and in pretty much all of them, the courts have said we can’t impose liability for the communication of bad ideas,” Cohn said. “You hold the person who commits the wrongful act responsible, and that’s it. There’s no such thing as negligently pointing someone to bad information.”

A better alternative to reforming Section 230 would be implementing “more disclosures and transparency specifically around how algorithms are developed and data about enforcement,” said Jessica Dheere, director of Ranking Digital Rights.

Social media platforms have a business incentive to take down terrorist content, and Section 230 is what allows them to do so without over-moderating, Sternburg said. “No one wants to see this horrible extremist content on digital platforms, especially the services themselves.”

Holding platforms liable for all speech that they carry could have a chilling effect on speech by motivating platforms to err on the side of removing content, Van Alstyne said.


Wednesday, January 25, 2023, 12 Noon ET – Section 230, Google, Twitter and the Supreme Court

The Supreme Court will soon hear two blockbuster cases involving Section 230 of the Communications Decency Act: Gonzalez v. Google on February 21, and Twitter v. Taamneh on February 22. Both of these cases ask whether tech companies can be held liable for terrorist content on their platforms. Also in play: laws in Florida and in Texas (both on hold during litigation) that would limit online platforms’ ability to moderate content. In a recent brief, Google argued that denying Section 230 protections for platforms “could have devastating spillover effects.” In advance of Broadband Breakfast’s Big Tech & Speech Summit on March 9, this Broadband Breakfast Live Online event will consider Section 230 and the Supreme Court.

Panelists:

  • Chris Marchese, Counsel, NetChoice
  • Ari Cohn, Free Speech Counsel, TechFreedom
  • Jessica Dheere, Director, Ranking Digital Rights
  • Ali Sternburg, Vice President of Information Policy, Computer & Communications Industry Association
  • Marshall Van Alstyne, Questrom Chair Professor, Boston University
  • Drew Clark (moderator), Editor and Publisher, Broadband Breakfast

Panelist resources:

Chris Marchese analyzes technology-related legislative and regulatory issues at both the federal and state level. His portfolio includes monitoring and analyzing proposals to amend Section 230 of the Communications Decency Act, antitrust enforcement, and potential barriers to free speech and free enterprise on the internet. Before joining NetChoice in 2019, Chris worked as a law clerk at the U.S. Chamber Litigation Center, where he analyzed legal issues relevant to the business community, including state-court decisions that threatened traditional liability rules.

Ari Cohn is Free Speech Counsel at TechFreedom. A nationally recognized expert in First Amendment law, he was previously the Director of the Individual Rights Defense Program at the Foundation for Individual Rights in Education (FIRE), has worked in private practice at Mayer Brown LLP and as a solo practitioner, and was an attorney with the U.S. Department of Education’s Office for Civil Rights. Ari graduated cum laude from Cornell Law School and earned his Bachelor of Arts degree from the University of Illinois at Urbana-Champaign.

Jessica Dheere is the director of Ranking Digital Rights, and co-authored RDR’s spring 2020 report “Getting to the Source of Infodemics: It’s the Business Model.” An affiliate at the Berkman Klein Center for Internet & Society, she is also founder, former executive director, and board member of the Arab digital rights organization SMEX, and in 2019, she launched the CYRILLA Collaborative, which catalogs global digital rights law and case law. She is a graduate of Princeton University and the New School.

Ali Sternburg is Vice President of Information Policy at the Computer & Communications Industry Association, where she focuses on intermediary liability, copyright, and other areas of intellectual property. Ali joined CCIA during law school in 2011, and previously served as Senior Policy Counsel, Policy Counsel, and Legal Fellow. She is also an Inaugural Fellow at the Internet Law & Policy Foundry.

Marshall Van Alstyne (@InfoEcon) is the Questrom Chair Professor at Boston University. His work explores how IT affects firms, innovation, and society with an emphasis on business platforms. He co-authored the international best seller Platform Revolution and his research influence ranks among the top 2% of all scientists globally.

Drew Clark (moderator) is CEO of Breakfast Media LLC. He has led the Broadband Breakfast community since 2008. An early proponent of better broadband, better lives, he initially founded the Broadband Census crowdsourcing campaign for broadband data. As Editor and Publisher, Clark presides over the leading media company advocating for higher-capacity internet everywhere through topical, timely and intelligent coverage. Clark also served as head of the Partnership for a Connected Illinois, a state broadband initiative.



Section 230

Section 230 Interpretation Debate Heats Up Ahead of Landmark Supreme Court Case

Panelists disagreed over the merits of Section 230’s protections and the extent to which they apply.

Screenshot of speakers at the Federalist Society webinar

WASHINGTON, January 25, 2023 — With less than a month to go before the Supreme Court hears a case that could dramatically alter internet platform liability protections, speakers at a Federalist Society webinar on Tuesday were sharply divided over the merits and proper interpretation of Section 230 of the Communications Decency Act.

Gonzalez v. Google, which will go before the Supreme Court on Feb. 21, asks if Section 230 protects Google from liability for hosting terrorist content — and promoting that content via algorithmic recommendations.

If the Supreme Court agrees that “Section 230 does not protect targeted algorithmic recommendations, I don’t see a lot of the current social media platforms and the way they operate surviving,” said Ashkhen Kazaryan, a senior fellow at Stand Together.

Joel Thayer, president of the Digital Progress Institute, argued that the bare text of Section 230(c)(1) does not include any mention of the “immunities” often attributed to the statute, echoing an argument made by several Republican members of Congress.

“All the statute says is that we cannot treat interactive computer service providers or users — in this case, Google’s YouTube — as the publisher or speaker of a third-party post, such as a YouTube video,” Thayer said. “That is all. Warped interpretations from courts… have drastically moved away from the text of the statute to find Section 230(c)(1) as providing broad immunity to civil actions.”

Kazaryan disagreed with this claim, noting that the original co-authors of Section 230 — Sen. Ron Wyden, D-OR, and former Rep. Chris Cox, R-CA — have repeatedly said that Section 230 does provide immunity from civil liability under specific circumstances.

Wyden and Cox reiterated this point in a brief filed Thursday in support of Google, explaining that whether a platform is entitled to immunity under Section 230 relies on two prerequisite conditions. First, the platform must not be “responsible, in whole or in part, for the creation or development of” the content in question, as laid out in Section 230(f)(3). Second, the case must be seeking to treat the platform “as the publisher or speaker” of that content, per Section 230(c)(1).

The statute co-authors argued that Google satisfied these conditions and was therefore entitled to immunity, even if their recommendation algorithms made it easier for users to find and consume terrorist content. “Section 230 protects targeted recommendations to the same extent that it protects other forms of content presentation,” they wrote.

Despite the support of Wyden and Cox, Randolph May, president of the Free State Foundation, predicted that the case was “not going to be a clean victory for Google.” And in addition to the upcoming Supreme Court cases, both Congress and President Joe Biden could potentially attempt to reform or repeal Section 230 in the near future, May added.

May advocated for substantial reforms to Section 230 that would narrow online platforms’ immunity. He also proposed that a new rule should rely on a “reasonable duty of care” that would both preserve the interests of online platforms and also recognize the harms that fall under their control.

To establish a good replacement for Section 230, policymakers must determine whether there is “a difference between exercising editorial control over content on the one hand, and engaging in conduct relating to the distribution of content on the other hand… and if so, how you would treat those differently in terms of establishing liability,” May said.

No matter the Supreme Court’s decision in Gonzalez v. Google, the discussion is already “shifting the Overton window on how we think about social media platforms,” Kazaryan said. “And we already see proposed regulation legislation on state and federal levels that addresses algorithms in many different ways and forms.”

Texas and Florida have already passed laws that would significantly limit social media platforms’ ability to moderate content, although both have been temporarily blocked pending litigation. Tech companies have asked the Supreme Court to take up the cases, arguing that the laws violate their First Amendment rights by forcing them to host certain speech.

