Social Media

Oversight Board Upholds Trump’s Ban From Facebook

The Oversight Board has sent the decision back to Facebook management, criticizing it for setting a “standardless” penalty.

May 5, 2021—The Oversight Board, the body established to review content decisions by Facebook management, has upheld the social media company’s indefinite ban on Donald Trump, making the case only the third in which the board has upheld a company decision.

The Board found Wednesday that Trump’s rhetoric “created an environment where a serious risk of violence was possible,” and noted that as President, he had a level of influence that the average Facebook user would not have. Given the size of his reach and the danger posed by his posts, the Oversight Board said the ban was justified.

Though the Board upheld the decision, it did not condone what it called Facebook’s “indeterminate and standardless penalty of indefinite suspension.” The Board considered the open-ended nature of the ban to be a violation of Facebook’s own rules, and said banned users should be given a clear, published procedure that precisely addresses their violation.

In a post on its website, the Oversight Board condemned Facebook for shirking responsibility: “In applying a vague, standardless penalty and then referring this case to the Board to resolve, Facebook seeks to avoid its responsibilities. The Board declines Facebook’s request and insists that Facebook apply and justify a defined penalty.”

The Board insisted that Facebook re-evaluate the case within six months of Wednesday’s decision.

Before this ruling, the Oversight Board had considered nine cases, overturning Facebook’s decision in six and upholding it in two (the Board did not reach a verdict in one). Trump’s case became the third decision the board has upheld.

Oversight Board recommendations

The Board also offered a series of policy recommendations and observations. It encouraged Facebook not to draw distinctions between politicians and other influential users, stating that users with a significant following are capable of serious harm regardless of whether they are politicians. The Board advised Facebook to act quickly and firmly when enforcing its rules against influential users.

The Board pushed back against Facebook’s explanation for its failure to enforce its rules against Trump sooner. Facebook had argued that even though Trump may have violated its terms of service, the company considered his content newsworthy. The Board stated that newsworthiness should not take precedence over the terms of service, and that a failure to recognize this led to confusion and uncertainty.

The Board, in a blog post, also proposed nine other recommendations.

Reaction

Rep. Frank Pallone, D-New Jersey, took to Twitter to criticize Facebook, tweeting: “Donald Trump has played a big role in helping Facebook spread disinformation, but whether he’s on the platform or not, Facebook and other social media platforms with the same business model will find ways to highlight divisive content to drive advertising revenues.”

Pallone followed up by accusing Facebook of “amplifying and promoting disinformation and misinformation,” and stated that the Oversight Board is ignoring this. He concluded by saying that Facebook will only ever be held accountable with legislation.

Trump’s removal from social media in January 2021 sent ripples throughout the tech world. Facebook was not the only company to deplatform the then-sitting president: Facebook’s Instagram, Twitter, Reddit, Twitch, YouTube, Snapchat, and a slew of others banned either Trump himself or the communities his followers used to communicate.

Questions surrounding the “tyranny of Big Tech,” antitrust policy, and free speech were at the forefront of political discussion, particularly for Republicans.

On Tuesday, Trump’s new social-media-style platform went live. Hosted at donaldjtrump.com and called “From the Desk of Donald Trump,” it resembles a Twitter feed that users can interact with by sharing posts to Facebook or Twitter. Users may also “like” posts directly on the site.

As a child of American parents working abroad, reporter Ben Kahn was raised as a third-culture kid, growing up in five different countries, including the U.S. He is a recent graduate of the University of Baltimore, where he majored in Policy, Politics, and International Affairs. He enjoys learning about foreign languages and cultures and can now speak poorly in more than one language.

Section 230

Repealing Section 230 Would be Harmful to the Internet As We Know It, Experts Agree

While some advocate for a tightening of language, other experts believe Section 230 should not be touched.

Rep. Ken Buck, R-Colo., speaking on the floor of the House

WASHINGTON, September 17, 2021—Rep. Ken Buck, R-Colorado, advocated for legislators to “tighten up” the language of Section 230 while preserving the “spirit of the internet” and enhancing competition.

There is common ground in supporting efforts to minimize speech advocating imminent harm, Buck said, though he noted that Republican and Democratic critics tend to approach the issue of changing Section 230 from vastly different directions.

“Nobody wants a terrorist organization recruiting on the internet or an organization that is calling for violent actions to have access to Facebook,” Buck said. He followed up that statement, however, by stating that the most effective way to combat “bad speech is with good speech” and not by censoring “what one person considers bad speech.”

Antitrust not necessarily the best means to improve competition policy

For companies that are not technically in violation of antitrust policies, improving competition through other means would have to be the answer, Buck said. He pointed to Parler as a social media platform that is an appropriate alternative to Twitter.

Though some Twitter users did flock to Parler, particularly during and around the 2020 election, the newer social media company has a reputation for allowing objectionable content that would otherwise be unable to thrive on social media.

Buck also set himself apart from some of his fellow Republicans—including Donald Trump—by clarifying that he does not want to repeal Section 230.

“I think that repealing Section 230 is a mistake,” he said. “If you repeal Section 230, there will be a slew of lawsuits.” Buck explained that without the protections afforded by Section 230, big companies would likely find a way to address these lawsuits, and the only entities harmed would be the alternative platforms meant to serve as competition.

More content moderation needed

Daphne Keller of the Stanford Cyber Policy Center argued that it is in the best interest of social media platforms to enact various forms of content moderation, and address speech that may be legal but objectionable.

“If platforms just hosted everything that users wanted to say online, or even everything that’s legal to say—everything that the First Amendment permits—you would get this sort of cesspool or mosh pit of online speech that most people don’t actually want to see,” she said. “Users would run away and advertisers would run away and we wouldn’t have functioning platforms for civic discourse.”

Even companies like Parler and Gab—which pride themselves on being unyielding bastions of free speech—have begun to engage in content moderation.

“There’s not really a left-right divide on whether that’s a good idea, because nobody actually wants nothing but porn and bullying and pro-anorexia content and other dangerous or garbage content all the time on the internet,” Keller said.

She explained that this is a double-edged sword, because while consumers seem to value some level of moderation, companies moderating their platforms have a huge amount of influence over what their consumers see and say.

What problems do critics of Section 230 want addressed?

Internet Association President and CEO Dane Snowden stated that most of the problems surrounding the Section 230 discussion boil down to a fundamental disagreement over the problems that legislators are trying to solve.

Changing the language of Section 230 would impact not just the tech industry: “[Section 230] impacts ISPs, libraries, and universities,” he said. “Things like self-publishing, crowdsourcing, Wikipedia, how-to videos—all those things are impacted by any kind of significant neutering of Section 230.”

Section 230 was created to give users the ability and security to create content online without fear of legal reprisals, he said.

Another significant supporter of the status quo was Chamber of Progress CEO Adam Kovacevich.

“I don’t think Section 230 needs to be fixed. I think it needs [a better] publicist,” Kovacevich said, arguing that policymakers need to gain a better appreciation for the law. “If you took away 230, you’d give companies two bad options: either turn into Disneyland or turn into a wasteland.”

“Either turn into a very highly curated experience where only certain people have the ability to post content, or turn into a wasteland where essentially anything goes because a company fears legal liability,” Kovacevich said.

Social Media

Members of Congress Request Facebook Halt ‘Instagram For Kids’ Plan Following Mental Health Research Report

Letter follows Wall Street Journal story reporting that Facebook knew about the harm Instagram does to teens’ mental health.

WASHINGTON, September 15, 2021 – Members of Congress sent a letter Wednesday to Facebook CEO Mark Zuckerberg urging the company to halt its plan to launch a new platform for kids, following a Wall Street Journal report citing company documents that reportedly show the company knows its platforms harm the mental health of teens.

The letter, signed by Edward Markey, D-Massachusetts, Kathy Castor, D-Florida, and Lori Trahan, D-Massachusetts, also asks Facebook to provide answers by October 6 to several questions: whether, and by whom, the company has reviewed the mental health research cited in the Journal report; whether the company will agree to abandon plans to launch a new platform for children or teens; and when the company will begin studying its platforms’ impact on kids’ mental health.

The letter also demands an update on the company’s plans for new products targeting children or teens, asks for copies of internal research regarding the mental health of this demographic, and copies of any external research the company has commissioned or accessed related to this matter.

The letter cites the Journal’s September 14 story, which reports that the company has spent the past three years studying how photo-sharing app Instagram, which Facebook owns, affects millions of young users, and found that the app is “harmful for a sizable percentage of them, most notably teenage girls.” The story recounts the experience of one teen who had to see a therapist for an eating disorder that developed after exposure to images of other users’ bodies.

The story also cites a presentation that said teens were blaming Instagram for anxiety, depression, and the desire to kill themselves.

The head of Instagram, Adam Mosseri, told the Journal that research on mental health was valuable and that Facebook was late to realize the drawbacks of connecting large swaths of people, according to the story. But he added that there’s “a lot of good that comes with what we do.”

Facebook told Congress it was planning ‘Instagram for kids’

Back in March, during a congressional hearing about Big Tech’s influence, Zuckerberg said Instagram was in the planning stages of building an “Instagram for kids.” Instagram itself does not allow kids under 13 to use the app.

On April 5, Markey, Castor and Trahan put their names to another letter to Zuckerberg expressing concerns about the plan. “Children are a uniquely vulnerable population online, and images of kids are highly sensitive data,” the April letter said. “Facebook has an obligation to ensure that any new platforms or projects targeting children put those users’ welfare first, and we are skeptical that Facebook is prepared to fulfil this obligation.”

The plan was also met with opposition from the Campaign for a Commercial-Free Childhood, the Center for Humane Technology, Common Sense Media, and the Center for Digital Democracy, which said the app “preys on their fear of missing out as their ravenous desire for approval by peers exploits their developmental growth.”

“The platform’s relentless focus on appearance, self-presentation, and branding presents challenges to adolescents’ privacy and well-being,” the opponents said. “Younger children are even less developmentally equipped to deal with these challenges, as they are learning to navigate social interactions, friendships, and their inner sense of strengths during this crucial window of development.”

At the March hearing, however, Zuckerberg claimed that social apps that connect people can have positive mental health benefits.

Then in August, Sens. Richard Blumenthal, D-Connecticut, and Marsha Blackburn, R-Tennessee, sent a letter to Zuckerberg asking for the company’s research on mental health. Facebook responded without providing the research, but said there are challenges with conducting such studies, the Journal said. “We are not aware of a consensus among studies or experts about how much screen time is ‘too much,’” according to the Journal, citing the response letter to the senators.

China

Experts Raise Alarm About China’s App Data Aggregation Potential

The Communist government has access to a vast trove of data from Chinese-made apps.

Margaret Peterlin, former Commerce Department aide and professor at Texas A&M University

WASHINGTON, September 2, 2021 – Social media app TikTok’s rise to become one of the world’s most-downloaded apps is concerning experts, who say the aggregate data collected across a number of Chinese-made apps will allow the Communist government to get ahead of any federal action to stem the data flow.

In June, President Joe Biden signed an executive order that revoked a Trump-era directive seeking to ban TikTok and replaced it with criteria for the Commerce Department to evaluate the risks of apps connected to foreign adversaries. The Trump administration had even pressured TikTok to sell its U.S. business, but a sale never materialized.

On a webinar hosted by the Federalist Society on Thursday, panelists said the U.S. government may already be behind in the race to contain data collection and prevent it from reaching the Chinese government, which is using the data to build advanced artificial intelligence.

Margaret Peterlin, a lawyer, former Commerce Department aide and professor at the school of public service at Texas A&M University, said her concern with Biden’s executive order is whether it is “strategically responsive” to what the Chinese government intends to do with all these sources of data – WeChat, TikTok, AliExpress – and with its massive influence in telecommunications through Huawei and ZTE.

She noted that the Communist government has been very clear about its direction – that it wants to dominate data aggregation and use that prowess to develop advanced artificial intelligence technologies. She illustrated this with the example of how the government uses advanced identification and surveillance technologies to monitor the Uyghur minority.

Peterlin also raised the issue of Chinese telecommunications companies like Huawei and ZTE, which have been the subject of restrictions from the Biden administration and the Federal Communications Commission in recent months. But she noted that Huawei is still involved with regional carriers in the United States.

The FCC has addressed this concern by offering to compensate carriers to “rip and replace” the risky equipment. (Part of Huawei’s attraction is its relatively low cost compared to its European rivals, for example.)

She noted that 5G “isn’t just another G” because there are many more connection and data points. Due to its promised lower latency, critical infrastructure like power grids, dams, and even medical devices can be controlled over the next-generation networks. Peterlin said these points of connection cannot be something the Chinese government is allowed to access.

For Jamil Jaffer, founder and executive director of the National Security Institute, his concern is the pace at which the Chinese government is moving. “I worry that it’s very late in the process to be getting to this point, and they’re well ahead of us,” he said, speaking on China’s growing influence in the technology space.

Jennifer Hay, senior director for national security programs at DataRobot, a company that develops AI software, said Biden’s executive order should be able to expand to other platforms and empower the Commerce Department to look into what is going on behind the scenes on these applications.

She said the government needs to be able to make educated decisions about who’s using Americans’ data and what that data is being used for.

Hay even suggested that Congress step in and draft legislation on this kind of data collection, but Jaffer disagreed on the grounds that not only would a slow-moving government not be able to keep up with the rapid movement of technology, but legislation may impede business. He said this type of work is best left to the private sector to figure out.
