
Social Media

White House on Friday to Host Social Media Officials to Discuss Violent Extremism


WASHINGTON, August 7, 2019 – The White House on Friday will host a meeting to bring together administration officials and technology executives to discuss ways to combat violent extremism on the internet, a senior administration official told Breakfast Media on Wednesday.

“We have invited internet and technology companies for a discussion of violent extremism online,” the official said.

The official stressed that the meeting would be led at the staff level, with select senior White House officials in attendance “along with representatives from a range of companies.”

The Trump administration’s newfound interest in combating online extremism comes in the wake of last weekend’s mass shooting at a Walmart in El Paso, Texas, which claimed the lives of 22 people.

The alleged perpetrator, a 21-year-old white nationalist, posted an online manifesto rife with anti-Hispanic and anti-immigrant rhetoric that closely tracked Trump’s own repeated references to an “invasion” of Mexicans and other Latin Americans at the United States border.

The manifesto, entitled “The Inconvenient Truth,” was posted to the online platform 8chan. In it, the alleged perpetrator claimed that the shooting was in response to the “Hispanic invasion” of Texas.

Last weekend’s shooting came less than six months after another mass shooter posted a similarly racist manifesto to 8chan before shooting and killing 51 people at two mosques in Christchurch, New Zealand.

In prepared remarks delivered on Monday, Trump did not address his own rhetoric’s role in inspiring the El Paso shooter, but he did attempt to place some measure of blame for the shooting on the internet, which he said “has provided a dangerous avenue to radicalize disturbed minds and perform demented acts.”

“We must shine light on the dark recesses of the Internet, and stop mass murders before they start,” he said.

“The perils of the Internet and social media cannot be ignored, and they will not be ignored.”

(Illustration by Melissa Joskow for Media Matters.)

Andrew Feinberg is the White House Correspondent and Managing Editor for Breakfast Media. He rejoined BroadbandBreakfast.com in late 2016 after working as a staff writer at The Hill and as a freelance writer. He worked at BroadbandBreakfast.com from its founding in 2008 to 2010, first as a Reporter and then as Deputy Editor. He also covered the White House for Russia's Sputnik News from the beginning of the Trump Administration until he was let go for refusing to use White House press briefings to promote conspiracy theories, and later documented the experience in a story which set off a chain of events leading to Sputnik being forced to register under the Foreign Agents Registration Act. Andrew's work has appeared in such publications as The Hill, Politico, Communications Daily, Washington Internet Daily, Washington Business Journal, The Sentinel Newspapers, FastCompany.TV, Mashable, and Silicon Angle.

Section 230

Repealing Section 230 Would be Harmful to the Internet As We Know It, Experts Agree

While some advocate for a tightening of language, other experts believe Section 230 should not be touched.


Rep. Ken Buck, R-Colo., speaking on the floor of the House

WASHINGTON, September 17, 2021—Republican representative from Colorado Ken Buck advocated for legislators to “tighten up” the language of Section 230 while preserving the “spirit of the internet” and enhancing competition.

There is common ground in supporting efforts to minimize speech advocating for imminent harm, said Buck, even though he noted that Republican and Democratic critics tend to approach the issue of changing Section 230 from vastly different directions.

“Nobody wants a terrorist organization recruiting on the internet or an organization that is calling for violent actions to have access to Facebook,” Buck said. He followed up that statement, however, by stating that the most effective way to combat “bad speech is with good speech” and not by censoring “what one person considers bad speech.”

Antitrust not necessarily the best means to improve competition policy

For companies that are not technically in violation of antitrust policies, improving competition through other means would have to be the answer, said Buck. He pointed to Parler as a social media platform that is an appropriate alternative to Twitter.

Though some Twitter users did flock to Parler, particularly during and around the 2020 election, the newer social media company has a reputation for allowing objectionable content that would not be permitted to thrive on other platforms.

Buck also set himself apart from some of his fellow Republicans—including Donald Trump—by clarifying that he does not want to repeal Section 230.

“I think that repealing Section 230 is a mistake,” he said. “If you repeal Section 230, there will be a slew of lawsuits.” Buck explained that without the protections afforded by Section 230, big companies would likely find a way to sufficiently address those lawsuits, and the only entities harmed would be the alternative platforms that were meant to serve as competition.

More content moderation needed

Daphne Keller of the Stanford Cyber Policy Center argued that it is in the best interest of social media platforms to enact various forms of content moderation, and address speech that may be legal but objectionable.

“If platforms just hosted everything that users wanted to say online, or even everything that’s legal to say—everything that the First Amendment permits—you would get this sort of cesspool or mosh pit of online speech that most people don’t actually want to see,” she said. “Users would run away and advertisers would run away and we wouldn’t have functioning platforms for civic discourse.”

Even companies like Parler and Gab—which pride themselves on being unyielding bastions of free speech—have begun to engage in content moderation.

“There’s not really a left right divide on whether that’s a good idea, because nobody actually wants nothing but porn and bullying and pro-anorexia content and other dangerous or garbage content all the time on the internet.”

She explained that this is a double-edged sword, because while consumers seem to value some level of moderation, companies moderating their platforms have a huge amount of influence over what their consumers see and say.

What problems do critics of Section 230 want addressed?

Internet Association President and CEO Dane Snowden stated that most of the problems surrounding the Section 230 discussion boil down to a fundamental disagreement over the problems that legislators are trying to solve.

Changing the language of Section 230 would impact not just the tech industry: “[Section 230] impacts ISPs, libraries, and universities,” he said. “Things like self-publishing, crowdsourcing, Wikipedia, how-to videos—all those things are impacted by any kind of significant neutering of Section 230.”

Section 230 was created to give users the ability and security to create content online without fear of legal reprisals, he said.

Another significant supporter of the status quo was Chamber of Progress CEO Adam Kovacevich.

“I don’t think Section 230 needs to be fixed. I think it needs [a better] publicist,” Kovacevich said, adding that policymakers need to gain a better appreciation for Section 230. “If you took away 230, you’d give companies two bad options: either turn into Disneyland or turn into a wasteland.”

“Either turn into a very highly curated experience where only certain people have the ability to post content, or turn into a wasteland where essentially anything goes because a company fears legal liability,” Kovacevich said.


Social Media

Members of Congress Request Facebook Halt ‘Instagram For Kids’ Plan Following Mental Health Research Report

Letter follows Wall Street Journal story reporting that Facebook knew about the mental health damage Instagram inflicts on teens.


WASHINGTON, September 15, 2021 – Members of Congress sent a letter Wednesday to Facebook CEO Mark Zuckerberg urging the company to stop its plan to launch a new platform for kids, following a report by the Wall Street Journal that cites company documents reportedly showing the company knows its platforms harm the mental health of teens.

The letter, signed by Edward Markey, D-Massachusetts, Kathy Castor, D-Florida, and Lori Trahan, D-Massachusetts, also asks Facebook to provide answers by October 6 to questions including whether the company has reviewed the mental health research cited in the Journal report and, if so, who reviewed it; whether the company will agree to abandon plans to launch a new platform for children or teens; and when the company will begin studying its platforms’ impact on kids’ mental health.

The letter also demands an update on the company’s plans for new products targeting children or teens, and asks for copies of internal research regarding the mental health of this demographic as well as copies of any external research the company has commissioned or accessed on the matter.

The letter cites the Journal’s September 14 story, which reports that the company has spent the past three years conducting studies into how photo-sharing app Instagram, which Facebook owns, affects millions of young users, and found that the app is “harmful for a sizable percentage of them, most notably teenage girls.” The story recounts the experience of one teen who had to see a therapist for an eating disorder after being exposed to images of other users’ bodies.

The story also cites a presentation that said teens were blaming Instagram for anxiety, depression, and the desire to kill themselves.

The head of Instagram, Adam Mosseri, told the Journal that research on mental health was valuable and that Facebook was late to realizing the drawbacks of connecting large swaths of people, according to the story. But he added that there’s “a lot of good that comes with what we do.”

Facebook told Congress it was planning ‘Instagram for kids’

Back in March, during a congressional hearing about Big Tech’s influence, Zuckerberg said Instagram was in the planning stages of building an “Instagram for kids.” Instagram itself does not allow kids under 13 to use the app.

On April 5, Markey, Castor and Trahan signed another letter to Zuckerberg expressing concerns about the plan. “Children are a uniquely vulnerable population online, and images of kids are highly sensitive data,” the April letter said. “Facebook has an obligation to ensure that any new platforms or projects targeting children put those users’ welfare first, and we are skeptical that Facebook is prepared to fulfil this obligation.”

The plan was also met with opposition from the Campaign for a Commercial-Free Childhood, the Center for Humane Technology, Common Sense Media, and the Center for Digital Democracy, which said the app “preys on their fear of missing out as their ravenous desire for approval by peers exploits their developmental growth.

“The platform’s relentless focus on appearance, self-presentation, and branding presents challenges to adolescents’ privacy and well-being,” the opponents said. “Younger children are even less developmentally equipped to deal with these challenges, as they are learning to navigate social interactions, friendships, and their inner sense of strengths during this crucial window of development.”

At the March hearing, however, Zuckerberg claimed that social apps that connect people can have positive mental health benefits.

Then in August, Sens. Richard Blumenthal, D-Connecticut, and Marsha Blackburn, R-Tennessee, sent a letter to Zuckerberg asking for the company’s research on mental health. Facebook responded without including that research, but said there are challenges with doing such research, the Journal said. “We are not aware of a consensus among studies or experts about how much screen time is ‘too much,’” according to the Journal, citing the response letter to the senators.


China

Experts Raise Alarm About China’s App Data Aggregation Potential

The Communist government has access to a vast trove from Chinese-made apps.


Former Commerce aide and professor at Texas A&M University, Margaret Peterlin

WASHINGTON, September 2, 2021 – Social media app TikTok’s rise to become one of the world’s most-downloaded apps is concerning experts, who say the aggregate data collected across a number of Chinese-made apps will allow the Communist government to get ahead of any federal action to stem the data flow.

In June, President Joe Biden signed an executive order that revoked a Trump-era directive seeking to ban TikTok and replaced it with criteria for the Commerce Department to use in evaluating the risks of apps connected to foreign adversaries. The Trump administration had also pressured TikTok to sell its U.S. business, but that sale never materialized.

On a webinar hosted by the Federalist Society on Thursday, panelists said the U.S. government may already be behind in the race to contain data collection and prevent it from getting into the hands of the Chinese government, which is using the data to create advanced artificial intelligence.

Margaret Peterlin, a lawyer, former Commerce Department aide and professor at the school of public service at Texas A&M University, said her concern with Biden’s executive order is whether it is “strategically responsive” to what the Chinese government intends to do with all these sources of data – WeChat, TikTok, AliExpress – and with the country’s massive influence in telecommunications through Huawei and ZTE.

She noted that the Communist government has been very clear about its direction: it wants to dominate data aggregation in order to develop advanced artificial intelligence technologies. She illustrated this with the example of how the government uses advanced identification and surveillance technologies to monitor the Uyghur minority.

Peterlin also raised the issue of Chinese telecommunications companies like Huawei and ZTE, which have been the subject of restrictions from the Biden administration and the Federal Communications Commission in recent months. But she noted that Huawei is still involved with regional carriers in the United States.

The FCC has addressed this concern by offering to compensate carriers to “rip and replace” that risky equipment. (Part of Huawei’s attraction is its relatively low cost compared to that of its European rivals.)

She noted that 5G “isn’t just another G” because there are many more connection and data points. Due to the promised lower latency, critical infrastructure like power grids and dams, and even medical devices, can be controlled over the next-generation networks. Peterlin said these points of connection cannot be something the Chinese government is allowed to access.

For Jamil Jaffer, founder and executive director of the National Security Institute, his concern is the pace at which the Chinese government is moving. “I worry that it’s very late in the process to be getting to this point, and they’re well ahead of us,” he said, speaking on China’s growing influence in the technology space.

Jennifer Hay, senior director for national security programs at DataRobot, a company that develops AI software, said Biden’s executive order should be able to expand to other platforms and empower the Commerce Department to look into what is going on behind the scenes on these applications.

She said the government needs to be able to make educated decisions about who’s using Americans’ data and what that data is being used for.

Hay even suggested that Congress step in and draft legislation on this kind of data collection, but Jaffer disagreed on the grounds that not only would a slow-moving government not be able to keep up with the rapid movement of technology, but legislation may impede business. He said this type of work is best left to the private sector to figure out.

