Seeking to Quell ‘Evil Contagion’ of ‘White Supremacy,’ President Trump May Ignite New Battle Over Online Hate Speech

WASHINGTON, August 5, 2019 — President Donald Trump on Monday morning attempted to strike a tone of unity by denouncing the white, anti-Hispanic man who “shot and murdered 20 people, and injured 26 others, including precious little children.”

In speaking about the two significant mass shootings over the weekend in Texas and Ohio, Trump delivered prepared remarks in which he specifically denounced “racism, bigotry, and white supremacy,” and linked them to the “warp[ed] mind” of the racially-motivated El Paso killer.

That shooter – now in custody – posted a manifesto online before the shooting in which he said he was responding to the “Hispanic invasion of Texas.” The shooter cited the March 15 massacre at two mosques in Christchurch, New Zealand, as an inspiration for his action.

In White House remarks with Vice President Mike Pence standing at his side, Trump proposed solutions to “stop this evil contagion.” Trump denounced “hate” or “racist hate” four times.

Trump’s first proposed solution: “I am directing the Department of Justice to work in partnership with local, state, and federal agencies, as well as social media companies, to develop tools that can detect mass shooters before they strike.”

That proposal appeared to be an initiative that was either targeted at – or potentially an opportunity for collaboration with – social media giants like Twitter, Facebook and Google.

Indeed, Trump and others on the political right have repeatedly criticized these social media giants for bias against Trump and Republicans.

Sometimes, this right-wing criticism of Twitter emerges after a user is banned for violating the social media company’s terms of service against “hate speech.”

In Trump’s remarks, he also warned that “we must shine light on the dark recesses of the internet.” Indeed, Trump said that “the perils of the internet and social media cannot be ignored, and they will not be ignored.”

But it must be equally clear to the White House that the El Paso killer – in his online manifesto – used anti-Hispanic and anti-immigrant rhetoric very similar to Trump’s own repeated words about an “invasion” of Mexican and other Latin Americans at the United States border.

Hence this mass murder contains elements of political peril for both Donald Trump and for his frequent rivals at social media companies like Twitter, Facebook and Google.

8chan gets taken down by its network provider

Minutes before the El Paso attack at a Wal-Mart, a manifesto titled “The Inconvenient Truth” was posted to the online platform 8chan, claiming that the shooting was in response to the “Hispanic invasion.” The killer specifically cited the Christchurch shooter’s white supremacist manifesto as an inspiration.

Social media platforms, long exploited by Islamic terrorists, are increasingly being used by white supremacist terrorists. In addition to posting his manifesto online, the Christchurch shooter livestreamed his attack on Facebook.

In April, a man posted an anti-Semitic and white nationalist letter to the same online forum, 8chan, before opening fire at a synagogue near San Diego, California.

And on July 28, the gunman who killed three people at a garlic festival in Gilroy, California, allegedly promoted a misogynist white supremacist book on Instagram just prior to his attack.

But Saturday’s El Paso shooting motivated some companies to act. Cloudflare, 8chan’s network provider, pulled its support for 8chan early on Monday morning, calling the platform a “cesspool of hate.”

“While removing 8chan from our network takes heat off of us, it does nothing to address why hateful sites fester online,” wrote Cloudflare CEO Matthew Prince.

“It does nothing to address why mass shootings occur,” said Prince. “It does nothing to address why portions of the population feel so disenchanted they turn to hate. In taking this action we’ve solved our own problem, but we haven’t solved the internet’s.”

Prince went on to voice his discomfort with the company taking on the role of content arbiter, and pointed to Europe’s attempts at greater government involvement.

The Christchurch massacre opened a dialogue between big tech and European critics of ‘hate speech’

Following the Christchurch attack, 18 governments in May signed the Christchurch Call pledge (PDF) seeking to stop the internet from being used as a tool by violent extremists. The U.S. did not sign on, and the White House voiced concerns that the document would violate the First Amendment.

Dubbed “The Christchurch Call to Action to Eliminate Terrorist and Violent Extremist Content Online,” the May document included commitments by both online service providers and governments.

Among other measures, the online providers were to “[t]ake transparent, specific measures seeking to prevent the upload of terrorist and violent extremist content and to prevent its dissemination on social media.”

Governments were to “[e]nsure effective enforcement of applicable laws that prohibit the production or dissemination of terrorist and violent extremist content.”

Although Silicon Valley has had a reputation for supporting a libertarian view of free speech, the increasingly unruly world of social media over the past decade has put that First Amendment absolutism to the test.

Indeed, five big tech giants – Google, Amazon, Facebook, Twitter and Microsoft – voiced their support for the Christchurch Call on the day of its release.

They acknowledged the apparent restrictions on freedom of speech that the Christchurch Call would impose, saying that the massacre was “a horrifying tragedy” that made it “right that we come together, resolute in our commitment to ensure we are doing all we can to fight the hatred and extremism that lead to terrorist violence.”

In particular, they noted that the Christchurch Call expands on the Global Internet Forum to Counter Terrorism set up by Facebook, Google’s YouTube, Microsoft and Twitter in the summer of 2017.

The organization’s objective is to disrupt terrorists’ ability to promote terrorism, disseminate violent propaganda, and exploit or glorify real-world acts of violence.

The tech giants said (PDF) that they were sharing more information about how they could “detect and remove this content from our services, updates to our individual terms of use, and more transparency for content policies and removals.”

Will Trump politicize the concept of ‘hate speech’ that tech companies are uniting with Europe to take down?

In his Monday statement commenting on an ostensible partnership between the Justice Department and the social media companies, Trump referred to the need to “detect mass shooters before they strike.”

And he had this specific example: “As an example, the monster in the Parkland high school in Florida had many red flags against him, and yet nobody took decisive action. Nobody did anything. Why not?”

Part of the challenge now faced by social media companies is frankly political. Although Twitter has taken aggressive steps to eradicate ISIS content from its platform, it has not applied the same tools and algorithms to take down white supremacist content.

Society accepts the risk of inconveniencing potentially related accounts, such as those of Arabic-language broadcasters, for the benefit of banning ISIS content, Motherboard reported earlier this year, based on its interviews with Twitter employees.

But if these same aggressive tactics were deployed against white nationalist terrorism, the algorithms would likely flag content from prominent Republican politicians, far-right commentators – and Donald Trump himself, these employees said.

Indeed, right after declining to sign the Christchurch Call, the White House escalated its war against American social media by announcing a campaign asking internet users to share stories of when they felt censored by Facebook, Twitter and Google’s YouTube.

And in June, Twitter made it clear that it was speaking directly about Tweets by prominent public officials, including the president, that violated its terms of service.

“In the past, we’ve allowed certain Tweets that violated our rules to remain on Twitter because they were in the public’s interest, but it wasn’t clear when and how we made those determinations,” a Twitter official said. “To fix that, we’re introducing a new notice that will provide additional clarity in these situations, and sharing more on when and why we’ll use it.”

White House officials did not immediately respond to questions about whether the Trump administration was reconsidering its opposition to the Christchurch Call.

Will Trump’s speech put others in the spotlight, or keep it on him and his rhetoric?

In addition to highlighting the anticipated effort with social media, Trump suggested four additional “bipartisan solutions” to the “evil contagion” behind the Texas and Ohio mass shootings.

They included “stop[ping] the glorification of violence in our society” in video games, addressing mental health laws “to better identify mentally disturbed individuals,” keeping firearms from those “judged to pose a grave risk to public safety,” and seeking the death penalty against those who commit hate crimes and mass murders.

Trump’s advisers said that they hoped the speech would stem the tide of media attention being given to the links between his frequent use of dehumanizing language to describe Latin American immigrants and the rhetoric of the El Paso shooter.

As he delivered his prepared remarks from a TelePrompTer in a halting cadence, Trump appeared to be reading the speech for the first time. This led to an awkward moment when he suggested that the second shooting of the weekend – which had taken place outside a Dayton, Ohio bar – had been in Toledo, Ohio.

But despite the visible discomfiture evident when he reads prepared remarks to the White House press pool cameras, Trump made an attempt to silence critics like former El Paso Congressman Beto O’Rourke – who just hours before had explicitly called the President a white nationalist – by calling for the defeat of the “sinister ideologies” of hate.

“In one voice, our nation must condemn racism, bigotry, and white supremacy,” Trump said. “Hate has no place in America. Hatred warps the mind, ravages the heart, and devours the soul.”

Trump did not elaborate on the hate-based motivations of the El Paso shooter. Rather than reflect on where the El Paso shooter may have gotten the idea that Hispanics were “invading” the United States, Trump cast blame on targets often invoked by conservatives after such mass shootings, including video games.

Although Trump has previously delivered remarks in the aftermath of violent acts committed by white supremacists and white nationalists during his presidency, Monday’s speech marked the first time that the President had chosen to specifically condemn “white supremacy,” rather than deliver a more general condemnation of “hate.”

In his rhetoric, both on his Twitter account and on the campaign trail, Trump uses non-whites as a foil, beginning with his 2015 campaign announcement speech, in which he described Mexican immigrants as “rapists” who bring crime and drugs to America.

That rhetoric reappeared during the 2018 Congressional elections, when talk of an “invasion” from South and Central America took up a significant portion of his rally stump speech.

As the 2020 election draws nearer, Trump’s campaign strategy seems similarly to rely on demonizing racial minorities and prominent Democrats of color, most recently Rep. Elijah Cummings, D-Md., the chairman of the House Oversight Committee.

Trump critics not appeased by his Monday speech

Against that backdrop, commentators said Monday’s condemnation of white supremacy marked a 180-degree turn for the President. But his performance did not leave many observers convinced of his sincerity.

House Homeland Security Committee Chairman Bennie Thompson, D-Miss., called the President’s speech “meaningless.”

“We know tragedy after tragedy his words have not led to solid action or any change in rhetoric. We know his vile and racist words have incited violence and attacks on Americans,” he said in a statement. “Now dozens are dead and white supremacist terrorism is on the rise and is now our top domestic terrorism threat.”

Sen. Ron Wyden, D-Ore., tweeted that Trump had “addressed the blaze today with the equivalent of a water balloon” after “fanning the flames of white supremacy for two-and-a-half years in the White House.”

Ohio Democratic Party Chairman David Pepper said Trump’s condemnation of white supremacy in Monday’s remarks could not make up for his years of racist campaign rhetoric.

“Through years of campaigning and hate rallies, to now say ‘I’m against hateful people and racism,’ is just hard to listen to,” Pepper said during a phone interview.

“Unless he’s willing to say ‘I know I’ve been a part of it’ with a full apology and some self recognition, it felt like he was just checking the boxes.”

Pepper suggested that Trump “was saying what someone told him to say,” and predicted that Trump would soon walk back his remarks, much as he did after the 2017 “Unite the Right” white supremacist rally in Virginia.

Charlie Sykes, a former conservative talk radio host and editor of “The Bulwark,” echoed Pepper’s sentiments in a separate phone interview, but also called out Trump for failing to speak of the El Paso shooter’s motivations.

“It was so perfunctory and inadequate because he condemned the words ‘bigotry and racism,’ but he didn’t describe what he was talking about,” Sykes said.

Sykes criticized Trump for failing to take responsibility for his routine use of racist rhetoric, including descriptions of immigrants as “invaders” who “infest” the United States.

“Unless you’re willing to discuss the dehumanization behind the crimes, the invocation of certain words doesn’t change anything.”

Another longtime GOP figure whom Trump failed to impress was veteran strategist Rick Wilson, who cited the speech as the latest example of “the delta between Trump on the TelePrompTer and Trump at a rally,” a difference he described as “enormous.”

“Nothing about that speech had a ring of authenticity to it,” said Wilson, a GOP ad maker and the author of “Everything Trump Touches Dies.”

“The contrast between the speechwriter’s handiwork and the real Donald Trump…is rather marked,” he said.

Where does online free speech – and allegations of ‘hate crimes’ – go from here?

Although the social media companies are making more efforts to identify and expunge online hate, they are unlikely to get very far without someone – perhaps even President Trump – crying foul.

Putting the politics of online hate speech aside, the U.S. does take a fundamentally different approach to freedom of expression than does Europe.

According to Human Rights Watch, hundreds of French citizens are convicted each year of “apology for terrorism,” an offense that covers any positive comment about a terrorist or terrorist organization. Online offenses are treated especially harshly.

By contrast, the U.S. has a fundamental commitment to the freedom of speech—including speech that is indecent, offensive, and hateful.

The Supreme Court has ruled that speech is unprotected when it is “directed to inciting or producing imminent lawless action” and is “likely to incite or produce such action.”

But this exception is extremely narrow—in Brandenburg v. Ohio, the Court reversed the conviction of a KKK leader who had advocated violence as a means of political reform, holding that his statements did not express an immediate intent to do violence.

The limitations on government leave the responsibility of combating online extremism to the digital platforms themselves, said Open Technology Institute Director Sarah Morris at a panel last month.

“In general, private companies have a lot more flexibility in how they respond to terrorist propaganda than Congress does,” said Emma Llansó, Director of the Free Expression Project at the Center for Democracy & Technology. “They need to be clear about what their policies are and enforce them transparently.”

But companies also need to carefully consider how they will respond to pressure from governments and individuals around the world, said Llansó, adding that “no content policy or community guideline is ever applied just in the circumstances it was designed for.”

“As the experience of social media companies has shown us, content moderation is extremely difficult to do well,” Llansó concluded. “It requires an understanding of the context that the speaker and the audience are operating in, which a technical infrastructure provider is not likely to have.”

(Managing Editor Andrew Feinberg and Reporter Emily McPhie contributed reporting to this article. Photo of Vice President Pence beside Trump speaking on August 5, 2019, from the White House.)

Breakfast Media LLC CEO Drew Clark has led the Broadband Breakfast community since 2008. An early proponent of better broadband, better lives, he initially founded the Broadband Census crowdsourcing campaign for broadband data. As Editor and Publisher, Clark presides over the leading media company advocating for higher-capacity internet everywhere through topical, timely and intelligent coverage. Clark also served as head of the Partnership for a Connected Illinois, a state broadband initiative.

Congress Grills TikTok CEO Over Risks to Youth Safety and China

House lawmakers presented a united front against TikTok as calls for a national ban gain momentum.

Screenshot of TikTok CEO Shou Chew courtesy of CSPAN

WASHINGTON, March 24, 2023 — TikTok CEO Shou Zi Chew faced bipartisan hostility from House lawmakers during a high-profile hearing on Thursday, struggling to alleviate concerns about the platform’s safety and security risks amid growing calls for the app to be banned from the United States altogether.

For more than five hours, members of the House Energy and Commerce Committee lobbed criticisms at TikTok, often leaving Chew little or no time to address their critiques.

“TikTok has repeatedly chosen the path for more control, more surveillance and more manipulation,” Chair Cathy McMorris Rodgers, R-Wash., told Chew at the start of the hearing. “Your platform should be banned. I expect today you’ll say anything to avoid this outcome.”

“Shou came prepared to answer questions from Congress, but, unfortunately, the day was dominated by political grandstanding,” TikTok spokesperson Brooke Oberwetter said in a statement after the hearing.

In a viral TikTok video posted Tuesday, and again in his opening statement, Chew noted that the app has over 150 million active monthly users in the United States. TikTok has also become a place where “close to 5 million American businesses — mostly small businesses — go to find new customers and to fuel their growth,” he said.

But McMorris Rodgers argued that the platform’s significant reach only “emphasizes the urgency for Congress to act.”

Lawmakers condemn TikTok’s impact on youth safety and mental health

One of the top concerns highlighted by both Republicans and Democrats was the risk TikTok poses to the wellbeing of children and teens.

“Research has found that TikTok’s addictive algorithms recommend videos to teens that create and exacerbate feelings of emotional distress, including videos promoting suicide, self-harm and eating disorders,” said Ranking Member Frank Pallone, D-N.J.

Chew emphasized TikTok’s commitment to removing explicitly harmful or violative content. The company is also working with entities such as Boston Children’s Hospital to develop models for identifying content that might harm young viewers if shown too frequently, even if the content is not inherently negative — for example, videos of extreme fitness regimens, Chew explained.

In addition, Chew listed several safeguards that TikTok has recently implemented for underage users, such as daily default time limits and the prevention of private messaging for users under 16.

However, few lawmakers seemed interested in these measures, with some noting that they appeared to lack enforceability. Others emphasized the tangible costs of weak safety policies, pointing to multiple youth deaths linked to the app.

Rep. Gus Bilirakis, R-Fla., shared the story of a 16-year-old boy who died by suicide after being served hundreds of TikTok videos glorifying suicidal ideation, self-harm and depression — even though such content was unrelated to his search history, according to a lawsuit filed by his parents against the platform.

At the hearing, Bilirakis underscored his concern by playing a series of TikTok videos with explicit descriptions of suicide, accompanied by messages such as “death is a gift” and “Player Tip: K!ll Yourself.”

“Your company destroyed their lives,” Bilirakis told Chew, gesturing toward the teen’s parents. “Your technology is literally leading to death, Mr. Chew.”

Other lawmakers noted that this death was not an isolated incident. “There are those on this committee, including myself, who believe that the Chinese Communist Party is engaged in psychological warfare through TikTok to deliberately influence U.S. children,” said Rep. Buddy Carter, R-Ga.

TikTok CEO emphasizes U.S. operations, denies CCP ties

Listing several viral “challenges” encouraging dangerous behaviors and substance abuse, Carter questioned why TikTok “consistently fails to identify and moderate these kinds of harmful videos” — and claimed that no such content was present on Douyin, the version of the app available in China.

Screenshot of Rep. Buddy Carter courtesy of CSPAN

Chew urged legislators to compare TikTok’s practices with those of other U.S. social media companies, rather than a version of the platform operating in an entirely different regulatory environment. “This is an industry challenge for all of us here,” he said.

Douyin heavily restricts political and controversial content in order to comply with China’s censorship regime, while the U.S. currently grants online platforms broad immunity from liability for third-party content.

In response to repeated accusations of CCP-driven censorship, particularly regarding the Chinese government’s human rights abuses against the Uyghur population, Chew maintained that related content “is available on our platform — you can go and search it.”

“We do not promote or remove content at the request of the Chinese government,” he repeatedly stated.

A TikTok search for “Uyghur genocide” on Thursday morning primarily displayed videos that were critical of the Chinese government, Broadband Breakfast found. The search also returned a brief description stating that China “has committed a series of ongoing human rights abuses against Uyghurs and other ethnic and religious minorities,” drawn from Wikipedia and pointing users to the U.S.-based website’s full article on the topic.

TikTok concerns bolster calls for Section 230 reform

Although much of the hearing was specifically targeted toward TikTok, some lawmakers used those concerns to bolster an ongoing Congressional push for Section 230 reform.

“Last year, a federal judge in Pennsylvania found that Section 230 protected TikTok from being held responsible for the death of a 10-year-old girl who participated in a blackout challenge,” said Rep. Bob Latta, R-Ohio. “This company is a picture-perfect example of why this committee in Congress needs to take action immediately to amend Section 230.”

In response, Chew referenced Latta’s earlier remarks about Section 230’s historical importance for online innovation and growth.

“As you pointed out, 230 has been very important for freedom of expression on the internet,” Chew said. “[Free expression] is one of the commitments we have given to this committee and our users, and I do think it’s important to preserve that. But companies should be raising the bar on safety.”

Rep. John Curtis, R-Utah, asked if TikTok’s use of algorithmic recommendations should forfeit the company’s Section 230 protections — echoing the question at the core of Gonzalez v. Google, which was argued before the Supreme Court in February.

Other inquiries were more pointed. Chew declined to answer a question from Rep. Randy Weber, R-Texas, about whether “censoring history and historical facts and current events should be protected by Section 230’s good faith requirement.”

Weber’s question seemed to incorrectly suggest that the broad immunity provided by Section 230(c)(1) is conditioned on the “good faith” referenced in part (c)(2)(A) of the statute.

Ranking member says ongoing data privacy initiative is unacceptable

Chew frequently pointed to TikTok’s “Project Texas” initiative as a solution to a wide range of data privacy concerns. “The bottom line is this: American data, stored on American soil, by an American company, overseen by American personnel,” he said.

All U.S. user data is now routed by default to Texas-based company Oracle, Chew added, and the company aims to delete legacy data currently stored in Virginia and Singapore by the end of the year.

Several lawmakers pointed to a Thursday Wall Street Journal article in which China’s Commerce Ministry reportedly said that a sale of TikTok would require exporting technology, and therefore would be subject to approval from the Chinese government.

When asked if Chinese government approval was required for Project Texas, Chew replied, “We do not believe so.”

But many legislators remained skeptical. “I still believe that the Beijing communist government will still control and have the ability to influence what you do, and so this idea — this ‘Project Texas’ — is simply not acceptable,” Pallone said.

Additional Content Moderation for Section 230 Protection Risks Reducing Speech on Platforms: Judge

People will migrate away from platforms with overly stringent content moderation measures.

Photo of Douglas Ginsburg by Barbara Potter/Free to Choose Media

WASHINGTON, March 13, 2023 – Requiring companies to moderate more content as a condition of Section 230 legal liability protections runs the risk of alienating users from platforms and discouraging communications, argued a judge of the U.S. Court of Appeals for the District of Columbia Circuit last week.

“The criteria for deletion are vague and difficult to parse,” Douglas Ginsburg, a Ronald Reagan appointee, said at a Federalist Society event on Wednesday. “Some of the terms are inherently difficult to define and policing what qualifies as hate speech is often a subjective determination.”

“If content moderation became very rigorous, it is obvious that users would depart from platforms that wouldn’t run their stuff,” Ginsburg added. “And they will try to find more platforms out there that will give them a voice. So, we’ll have more fragmentation and even less communication.”

Ginsburg noted that the large technology platforms already moderate a massive amount of content, and that adding additional moderation would be fairly challenging.

“Twitter, YouTube and Facebook remove millions of posts and videos based on those criteria alone,” Ginsburg noted. “YouTube gets 500 hours of video uploaded every minute, 30,000 minutes of video coming online every minute. So the task of moderating this is obviously very challenging.”

John Samples, a member of Meta’s Oversight Board – which provides direction for the company on content – suggested Thursday that out-of-court dispute resolution institutions for content moderation may become the preferred method of settlement.

The United States may in the future adopt European processes, as Europe takes the lead in moderating big tech, Samples claimed.

“It would largely be a private system,” he said, one that could unify and centralize social media moderation across platforms and around the world. He was referring to the European Union’s Digital Services Act, which went into effect in November 2022 and requires platforms to remove illegal content and ensure that users can contest removal of their content.

Section 230 Shuts Down Conversation on First Amendment, Panel Hears

The law prevents discussion of how the First Amendment should be applied in a new age of technology, says expert.

Photo of Ron Yokubaitis of Texas.net, Ashley Johnson of Information Technology and Innovation Foundation, Emma Llanso of Center for Democracy and Technology, Matthew Bergman of Social Media Victims Law Center, and Chris Marchese of Netchoice (left to right)

WASHINGTON, March 9, 2023 – Section 230 as it is written shuts down the conversation about the First Amendment, claimed experts in a debate at Broadband Breakfast’s Big Tech & Speech Summit Thursday.

Matthew Bergman, founder of the Social Media Victims Law Center, suggested that Section 230 forecloses discussion of the appropriate weighing of the costs and benefits of granting big tech companies immunity from litigation over moderation decisions on their platforms.

We need to talk about what level of First Amendment protection is necessary in a new world of technology, said Bergman. This discussion happens primarily in an open litigation process, he said, which is not currently available to those who are harmed by these products.

All companies must exercise reasonable care, Bergman argued. Opening up litigation doesn’t mean that all claims are necessarily viable, only that the process should work itself out in the courts of law, he said.

Eliminating Section 230 could lead online services to “overcorrect” in moderating speech, which could suffocate social reform movements organized on those platforms, argued Ashley Johnson of the Information Technology and Innovation Foundation, a research institution.

Furthermore, the burden of litigation would fall disproportionately on the companies with fewer resources to defend themselves, she continued.

Bergman responded: “If a social media platform is facing a lot of lawsuits because there are a lot of kids who have been hurt through the negligent design of that platform, why is that a bad thing?” People who are injured have the right by law to seek redress against the entity that caused that injury, Bergman said.

Emma Llansó of the Center for Democracy and Technology suggested that if Section 230 were reformed or abolished, platforms would fundamentally change the way they operate to avoid the threat of litigation, which could threaten freedom of speech for their users.

It is necessary for the protection of the First Amendment that the internet consist of many platforms with different content moderation policies, to ensure that all people have a voice, she said.

To this, Bergman argued that there is a distinction between algorithms that suggest content users do not want to see – even content whose existence is unknown to them – and ensuring that speech is not censored.

It is a question of balancing the faulty design of a product against protecting speech, and courts are where this balancing act should take place, said Bergman.

This comes days after law professionals urged Congress to amend the statute to specify that it applies only to free speech, rather than to the negligent design of product features that promote harmful speech. The discussion followed Supreme Court arguments over whether to provide immunity to Google for recommending terrorist videos on its video platform YouTube.
