Automated Social Media Moderation In Focus Following Allegations Of Censorship

Panelists say they’ve been censored on social media — and they point to platforms’ automated moderation.

June 2, 2021 – Social media platforms with automated moderation systems have been wittingly or unwittingly censoring legitimate speech, according to activists, with those corporate tools coming into focus following last month’s violence in the Middle East.

Platforms like Facebook and its subsidiary Instagram, as well as others, have moderation systems that automatically flag and remove posts that may encourage hate speech or violence.

But those systems have been taking down, blocking and censoring content from Palestinians, a problem made evident as violence erupted between Israelis and Palestinians last month and continues today, according to a panel hosted by the Middle East Institute Wednesday.

Words have been misinterpreted as terrorist speech, then flagged and removed, the panelists said. Middle East policy analyst Marwa Fatafta cited the erroneous association of Al-Aqsa in Jerusalem, Islam’s third-holiest mosque, with a terrorist organization. The error led to blocked hashtags, removed users and deleted posts, and Facebook’s response was that it was just a “technical glitch.”

“Their machines are blind to the vital context,” Fatafta said. “This is not unique to the Palestinians. This is bad news to all aspects of social justice.”

Palestinians have said that their perspective has not been reflected adequately in traditional media, and they have taken to social media as a way to get their message across.

The discussion comes as conversations heat up about possible reforms to Section 230, the legal provision governing platform liability for what users post.

In a time of such violence, Fatafta explained, this is a profound problem from a human rights perspective that these large companies need to address immediately. She said the danger of the power being given to these big tech companies is that they can choose the narrative they want the world to hear and censor what they deem unacceptable.

Ignacio Delgado Culebras, a journalist covering the Middle East and North Africa, said there needs to be more transparency from these social media platforms. He explained that the public is still left in the dark about how companies make these decisions and whom they consult, and that thousands of requests over the years to adjust the community standards have been denied.

“These are ultimately human policy decisions, and they can be addressed or reversed,” said Eliza Campbell, an associate director at the Middle East Institute. “These are systems that we chose, and we can choose to reconsider them, and hopefully, that will be something we can see going forward.”

Reporter Sophie Draayer, a native Las Vegan, studied strategic communication and political science at the University of Utah. In her free time, she plays mahjong, learns new songs on the guitar, and binge-watches true-crime docuseries on Netflix.

Repealing Section 230 Would Be Harmful to the Internet As We Know It, Experts Agree

While some advocate for a tightening of language, other experts believe Section 230 should not be touched.

Rep. Ken Buck, R-Colo., speaking on the floor of the House

WASHINGTON, September 17, 2021 – Rep. Ken Buck, R-Colorado, advocated for legislators to “tighten up” the language of Section 230 while preserving the “spirit of the internet” and enhancing competition.

There is common ground in supporting efforts to minimize speech advocating imminent harm, said Buck, even though he noted that Republican and Democratic critics tend to approach the issue of changing Section 230 from vastly different directions.

“Nobody wants a terrorist organization recruiting on the internet or an organization that is calling for violent actions to have access to Facebook,” Buck said. He followed up that statement, however, by stating that the most effective way to combat “bad speech is with good speech” and not by censoring “what one person considers bad speech.”

Antitrust not necessarily the best means to improve competition policy

For companies that are not technically in violation of antitrust policies, improving competition through other means would have to be the answer, said Buck. He pointed to Parler as a social media platform that is an appropriate alternative to Twitter.

Though some Twitter users did flock to Parler, particularly during and around the 2020 election, the newer social media company has a reputation for allowing objectionable content that would not survive on mainstream platforms.

Buck also set himself apart from some of his fellow Republicans—including Donald Trump—by clarifying that he does not want to repeal Section 230.

“I think that repealing Section 230 is a mistake,” he said. “If you repeal Section 230, there will be a slew of lawsuits.” Buck explained that without the protections afforded by Section 230, big companies will likely find ways to address these lawsuits, and the only entities harmed will be the alternative platforms that were meant to serve as competition.

More content moderation needed

Daphne Keller of the Stanford Cyber Policy Center argued that it is in the best interest of social media platforms to enact various forms of content moderation and to address speech that may be legal but objectionable.

“If platforms just hosted everything that users wanted to say online, or even everything that’s legal to say—everything that the First Amendment permits—you would get this sort of cesspool or mosh pit of online speech that most people don’t actually want to see,” she said. “Users would run away and advertisers would run away and we wouldn’t have functioning platforms for civic discourse.”

Even companies like Parler and Gab—which pride themselves on being unyielding bastions of free speech—have begun to engage in content moderation.

“There’s not really a left-right divide on whether that’s a good idea, because nobody actually wants nothing but porn and bullying and pro-anorexia content and other dangerous or garbage content all the time on the internet,” Keller said.

She explained that this is a double-edged sword, because while consumers seem to value some level of moderation, companies moderating their platforms have a huge amount of influence over what their consumers see and say.

What problems do critics of Section 230 want addressed?

Internet Association President and CEO Dane Snowden stated that most of the problems surrounding the Section 230 discussion boil down to a fundamental disagreement over the problems that legislators are trying to solve.

Changing the language of Section 230 would impact not just the tech industry: “[Section 230] impacts ISPs, libraries, and universities,” he said. “Things like self-publishing, crowdsourcing, Wikipedia, how-to videos—all those things are impacted by any kind of significant neutering of Section 230.”

Section 230 was created to give users the ability and security to create content online without fear of legal reprisals, he said.

Another significant supporter of the status quo was Chamber of Progress CEO Adam Kovacevich.

“I don’t think Section 230 needs to be fixed. I think it needs [a better] publicist,” Kovacevich said, adding that policymakers need to gain a better appreciation for the law. “If you took away 230, you would give companies two bad options: either turn into Disneyland or turn into a wasteland.”

“Either turn into a very highly curated experience where only certain people have the ability to post content, or turn into a wasteland where essentially anything goes because a company fears legal liability,” Kovacevich said.

Members of Congress Request Facebook Halt ‘Instagram For Kids’ Plan Following Mental Health Research Report

Letter follows a Wall Street Journal story reporting that Facebook knew about the mental health damage Instagram causes teens.

WASHINGTON, September 15, 2021 – Members of Congress sent a letter Wednesday to Facebook CEO Mark Zuckerberg urging the company to stop its plan to launch a new platform for kids, following a report by the Wall Street Journal citing company documents that reportedly show the company knows its platforms harm the mental health of teens.

The letter, signed by Edward Markey, D-Massachusetts, Kathy Castor, D-Florida, and Lori Trahan, D-Massachusetts, also asks Facebook to provide answers by October 6 to several questions: whether the company reviewed the mental health research cited in the Journal report, and if so, who reviewed it; whether the company will agree to abandon plans to launch a new platform for children or teens; and when the company will begin studying its platforms’ impact on kids’ mental health.

The letter also demands an update on the company’s plans for new products targeting children or teens, asks for copies of internal research regarding the mental health of this demographic, and requests copies of any external research the company has commissioned or accessed on the matter.

The letter cites the Journal’s September 14 story, which reports that the company has spent the past three years conducting studies into how photo-sharing app Instagram, which Facebook owns, affects millions of young users, and found that the app is “harmful for a sizable percentage of them, most notably teenage girls.” The story recounts the case of a teen who had to see a therapist for an eating disorder that developed after exposure to images of other users’ bodies.

The story also cites a presentation that said teens were blaming Instagram for anxiety, depression, and the desire to kill themselves.

The head of Instagram, Adam Mosseri, told the Journal that research on mental health was valuable and that Facebook was late to realizing the drawbacks of connecting large swaths of people, according to the story. But he added that there’s “a lot of good that comes with what we do.”

Facebook told Congress it was planning ‘Instagram for kids’

Back in March, during a congressional hearing about Big Tech’s influence, Zuckerberg said Instagram was in the planning stages of building an “Instagram for kids.” Instagram itself does not allow kids under 13 to use the app.

On April 5, Markey, Castor and Trahan signed an earlier letter to Zuckerberg expressing concerns about the plan. “Children are a uniquely vulnerable population online, and images of kids are highly sensitive data,” the April letter said. “Facebook has an obligation to ensure that any new platforms or projects targeting children put those users’ welfare first, and we are skeptical that Facebook is prepared to fulfil this obligation.”

The plan also met opposition from the Campaign for a Commercial-Free Childhood, the Center for Humane Technology, Common Sense Media, and the Center for Digital Democracy, which said the app “preys on their fear of missing out as their ravenous desire for approval by peers exploits their developmental growth.

“The platform’s relentless focus on appearance, self-presentation, and branding presents challenges to adolescents’ privacy and well-being,” the opponents said. “Younger children are even less developmentally equipped to deal with these challenges, as they are learning to navigate social interactions, friendships, and their inner sense of strengths during this crucial window of development.”

At the March hearing, however, Zuckerberg claimed that social apps that connect people can have positive mental health benefits.

Then in August, Sens. Richard Blumenthal, D-Connecticut, and Marsha Blackburn, R-Tennessee, sent a letter to Zuckerberg asking for the company’s research on mental health. Facebook responded without providing the research, but said there are challenges with doing such research, the Journal said. “We are not aware of a consensus among studies or experts about how much screen time is ‘too much,’” according to the Journal, citing the response letter to the senators.

Experts Raise Alarm About China’s App Data Aggregation Potential

The Communist government has access to a vast trove of data from Chinese-made apps.

Former Commerce aide and professor at Texas A&M University, Margaret Peterlin

WASHINGTON, September 2, 2021 – Social media app TikTok’s rise as one of the world’s most downloaded apps is concerning experts, who say the aggregate data collected across a number of Chinese-made apps will allow the Communist government to get ahead of any federal action to stem the data flow.

In June, President Joe Biden signed an executive order that revoked a Trump-era directive seeking to ban TikTok and replaced it with criteria for the Commerce Department to use in evaluating the risks of apps connected to foreign adversaries. The Trump administration had also pressured TikTok to sell its U.S. business, but a sale never materialized.

On a webinar hosted by the Federalist Society on Thursday, panelists said the U.S. government may already be behind in the race to contain data collection and prevent it from getting into the hands of the Chinese government, which is using the data to create advanced artificial intelligence.

Margaret Peterlin, a lawyer, former Commerce Department aide and professor at the school of public service at Texas A&M University, said her concern with Biden’s executive order is whether it is “strategically responsive” to what the Chinese government intends to do with all these sources of data – WeChat, TikTok, AliExpress – and with its massive influence in telecommunications through Huawei and ZTE.

She noted that the Communist government has been very clear about its direction: it wants to dominate data aggregation in order to develop advanced artificial intelligence technologies. She illustrated this with the example of how the government uses advanced identification and surveillance technologies to monitor the Uyghur minority.

Peterlin also raised the issue of Chinese telecommunications companies like Huawei and ZTE, which have been the subject of restrictions from the Biden administration and the Federal Communications Commission in recent months. But she noted that Huawei is still involved with regional carriers in the United States.

The FCC has addressed this concern by offering to compensate carriers to “rip and replace” the risky equipment. (Part of Huawei’s attraction is its relatively low cost compared to its European rivals.)

She noted that 5G “isn’t just another G” because there are many more connection and data points. Due to the promised lower latency, critical infrastructure like power grids and dams, and even medical devices, can be controlled over the next-generation networks. Peterlin said these points of connection cannot be something the Chinese government can access.

For Jamil Jaffer, founder and executive director of the National Security Institute, his concern is the pace at which the Chinese government is moving. “I worry that it’s very late in the process to be getting to this point, and they’re well ahead of us,” he said, speaking on China’s growing influence in the technology space.

Jennifer Hay, senior director for national security programs at DataRobot, a company that develops AI software, said Biden’s executive order should be able to expand to other platforms and empower the Commerce Department to look into what is going on behind the scenes on these applications.

She said the government needs to be able to make educated decisions about who’s using Americans’ data and what that data is being used for.

Hay even suggested that Congress step in and draft legislation on this kind of data collection, but Jaffer disagreed, arguing that a slow-moving government would not be able to keep up with the rapid movement of technology and that legislation may impede business. He said this type of work is best left to the private sector to figure out.
