Social Media

Social Media Companies Noncommittal on Bipartisan Calls for Changes to Content Regulation

Platform representatives did not commit to legislation that would increase online protections for kids.


Sen. Richard Blumenthal, D-Connecticut

WASHINGTON, October 28, 2021 – Members of the Senate Commerce Subcommittee on Consumer Protection on Tuesday pressed representatives from Snapchat, TikTok and YouTube about what their platforms put in front of kids. The platforms did not commit to changes proposed by lawmakers, capping a month that included revelations of the negative impact social media can have on the mental health of kids.

During the hearing, subcommittee chairman Sen. Richard Blumenthal, D-Connecticut, said his staff had created a TikTok account and, while at first they were shown videos of dance trends popularized on the app, it took only one week for the app’s algorithm to place videos encouraging suicidal ideation on their feed. Blumenthal also noted that after his staff viewed fitness-related videos geared toward a male audience on social media, it took only one minute to find posts promoting illegal steroids.

Blumenthal also raised other concerning videos his staff found, including a stunt whereby kids are encouraged to hold their breath until they lose consciousness.

In response, Michael Beckerman, TikTok’s head of public policy, stated that TikTok has “not been able to find any evidence of a blackout challenge on TikTok at all.” In response to Beckerman, Blumenthal said that his office had been able to find “pass out videos” and that he found Beckerman’s statements on the matter to be unreliable.

Tuesday’s hearing comes mere weeks after a Facebook whistleblower testified that the company does not act on its own internal research showing its photo-sharing app Instagram has a negative impact on kids’ health because doing so would conflict with its profit motive. The testimony came after the whistleblower, Frances Haugen, leaked the research to the Wall Street Journal and the Securities and Exchange Commission. Since then, Facebook has halted development of an Instagram app for kids.

The hearing pressed tech platform representatives on social media policies that lawmakers say have led to the sale of illegal drugs to minors online, the exposure of minors to content that promotes self-harm, and sexual predators’ access to children.

Senators also criticized the social media platforms’ lack of data privacy policies and contended that they often refuse to cooperate with law enforcement investigations and display indifference toward keeping children from using their platforms. Both Snapchat’s and TikTok’s representatives committed to providing access to the algorithms used in their apps after senators asked whether they would.

However, the representatives would not all commit their companies to supporting proposed regulatory legislation such as the Children and Teens’ Online Privacy Protection Act, written by subcommittee member Sen. Ed Markey, D-Massachusetts, which would prohibit the collection of personal information from kids ages 13 to 15 without consent, ban targeted advertising directed at kids, and let kids and teens erase any personal information collected on them at any point with an erase button.

The representatives also did not commit to supporting the EARN IT Act of 2020, which would amend Section 230 and allow social media platforms to be held liable in cases where they are suspected to have caused harm to children. Throughout the hearing, the social media representatives tended to emphasize the importance of parents taking an active role in controlling what their children view on social media.

Reporter T.J. York received his degree in political science from the University of Southern California. He has experience working for elected officials and in campaign research. He is interested in the effects of politics on the tech sector.

Section 230

Experts Warn Against Total Repeal of Section 230

Panelists note shifting definition of offensive content.


WASHINGTON, November 22, 2021 – Communications experts say action by Congress to essentially gut Section 230 would not truly solve any problems with social media.

Experts emphasized that it is not possible for platforms to remove from their site all content that people may believe to be dangerous. They argue that Section 230 of the Communications Decency Act, which shields platforms from legal liability with respect to what their users post, is necessary in at least some capacity.

During a discussion among these experts at Broadband Breakfast’s Live Online Event on Wednesday, Alex Feerst, co-founder of the Digital Trust and Safety Partnership and a former content moderator, said that to a certain extent it is impossible for platforms to moderate speech that is “dangerous” because every person has a different opinion about what speech is dangerous. It is this ambiguity, he said, that Section 230 protects companies from.

Still, Feerst believes platforms should hold some degree of liability for the content on their sites, as harm mitigation for dangerous speech is necessary where possible. He believes platforms’ use of artificial intelligence makes some degree of liability even more essential.

Particularly given the volume of online speech moderators must review in the internet age, Feerst said, clear-cut moderation standards are too messy and expensive to be viable options.

Matt Gerst, vice president for legal and policy affairs at the Internet Association, and Shane Tews, nonresident senior fellow at the American Enterprise Institute, also said that while content moderation is complex, it is necessary. Scott McCollough, attorney at McCollough Law Firm, said large social media companies like Facebook are not the cause of all the problems with social media now in the national spotlight; rather, features of today’s society, such as the extreme prevalence of conflict, are to blame for the focus on social media.

Proposals for change

Rick Lane, CEO of Iggy Ventures, proposed that reform of Section 230 should include a requirement for social media platforms to make very clear what content is and is not allowed on their sites. McCollough echoed this concern, saying that many moderation actions platforms currently take do not seem consistent with those platforms’ stated terms and conditions, and that individual states should be able to examine these instances case by case to determine whether platforms apply their terms and conditions fairly.

Feerst highlighted the nuance of the issue, saying that people’s definitions of “consistent” are naturally subjective, but agreed with McCollough that users who have content removed should be notified, along with the reasoning for the moderators’ action.

Lane also believes that Section 230 reform should rightly include a requirement for platforms to demonstrate a reasonable standard of care and to moderate illegal and other extremely dangerous content on their sites. Tews generally agreed with Lane that such content moderation is complex, as she sees a separation between freedom of speech and illegal activity.

Gerst highlighted concerns from companies the Internet Association represents that government regulation stemming from Section 230 reform would require widely varied platforms to standardize their operational approaches, diminishing innovation on the internet.


Section 230

Democrats Use Whistleblower Testimony to Launch New Effort at Changing Section 230

The Justice Against Malicious Algorithms Act seeks to target large online platforms that push harmful content.


Rep. Anna Eshoo, D-California

WASHINGTON, October 14, 2021 – House Democrats are preparing to introduce legislation Friday that would remove legal immunities for companies that knowingly allow content that is physically or emotionally damaging to their users, following testimony last week from a Facebook whistleblower who claimed the company is able to push harmful content because of such legal protections.

The Justice Against Malicious Algorithms Act would amend Section 230 of the Communications Decency Act – which provides legal liability protections to companies for the content their users post on their platform – to remove that shield when the platform “knowingly or recklessly uses an algorithm or other technology to recommend content that materially contributes to physical or severe emotional injury,” according to a Thursday press release, which noted that the legislation will not apply to small online platforms with fewer than five million unique monthly visitors or users.

The legislation is relatively narrow in its target: algorithms that rely on a user’s personal history to recommend content. It won’t apply to search features or to algorithms that do not rely on that personalization, nor to web hosting or data storage and transfer.

Reps. Anna Eshoo, D-California, Frank Pallone Jr., D-New Jersey, Mike Doyle, D-Pennsylvania, and Jan Schakowsky, D-Illinois, plan to introduce the legislation a little over a week after Facebook whistleblower Frances Haugen alleged that the company misrepresents how much offending content it terminates.

Citing Haugen’s testimony before the Senate on October 5, Eshoo said in the release that “Facebook is knowingly amplifying harmful content and abusing the immunity of Section 230 well beyond congressional intent.

“The Justice Against Malicious Algorithms Act ensures courts can hold platforms accountable when they knowingly or recklessly recommend content that materially contributes to harm. This approach builds on my bill, the Protecting Americans from Dangerous Algorithms Act, and I’m proud to partner with my colleagues on this important legislation.”

The Protecting Americans from Dangerous Algorithms Act was introduced with Rep. Tom Malinowski, D-New Jersey, last October to hold companies responsible for “algorithmic amplification of harmful, radicalizing content that leads to offline violence.”

From Haugen testimony to legislation

Haugen claimed in her Senate testimony that according to internal research estimates, Facebook acts against just three to five percent of hate speech and 0.6 percent of violence incitement.

“The reality is that we’ve seen from repeated documents in my disclosures is that Facebook’s AI systems only catch a very tiny minority of offending content and best content scenario in the case of something like hate speech at most they will ever get 10 to 20 percent,” Haugen testified.

Haugen was catapulted into the national spotlight after she revealed herself on the television program 60 Minutes to be the person who leaked documents to the Wall Street Journal and the Securities and Exchange Commission that reportedly showed Facebook knew about the mental health harm its photo-sharing app Instagram has on teens but allegedly ignored the findings because they inconvenienced its profit motive.

Earlier this year, Facebook CEO Mark Zuckerberg said the company was developing an Instagram version for kids under 13. But following the Journal story and calls by lawmakers to back down from pursuing the app, Facebook suspended the app’s development and said it was making changes to its apps to “nudge” users away from content that may be harmful to them.

Haugen’s testimony versus Zuckerberg’s Section 230 vision

In his testimony before the House Energy and Commerce committee in March, Zuckerberg claimed that the company’s hate speech removal policy “has long been the broadest and most aggressive in the industry.”

This claim has been the basis for the CEO’s suggestion that Section 230 be amended to punish companies for not creating systems proportional in size and effectiveness to the company’s or platform’s size for removal of violent and hateful content. In other words, larger sites would have more regulation and smaller sites would face fewer regulations.

Or in Zuckerberg’s words to Congress, “platforms’ intermediary liability protection for certain types of unlawful content [should be made] conditional on companies’ ability to meet best practices to combat the spread of harmful content.”

Facebook has previously pushed for FOSTA-SESTA, a controversial 2018 law that created an exception to Section 230 for advertisements related to prostitution. Lawmakers have proposed other modifications to the liability provision, including removing protections for content the platform is paid to carry and for allowing the spread of vaccine misinformation.

Zuckerberg said companies shouldn’t be held responsible for individual pieces of content which could or would evade the systems in place so long as the company has demonstrated the ability and procedure of “adequate systems to address unlawful content.” That, he said, is predicated on transparency.

But according to Haugen, “Facebook’s closed design means it has no oversight — even from its own Oversight Board, which is as blind as the public. Only Facebook knows how it personalizes your feed for you. It hides behind walls that keep the eyes of researchers and regulators from understanding the true dynamics of the system.” She also alleges that Facebook’s leadership hides “vital information” from the public and global governments.

An Electronic Frontier Foundation study found that Facebook lags behind competitors on issues of transparency.

Where the parties agree

Zuckerberg and Haugen do agree that Section 230 should be amended. Haugen would amend Section 230 “to make Facebook responsible for the consequences of their intentional ranking decisions,” meaning that practices such as engagement-based ranking would be evaluated for the incendiary or violent content they promote above more mundane content. If Facebook is choosing to promote content which damages mental health or incites violence, Haugen’s vision of Section 230 would hold them accountable. This change would not hold Facebook responsible for user-generated content, only the promotion of harmful content.

Both have also called for a third-party body to be created by the legislature which provides oversight on platforms like Facebook.

Haugen asks that this body be able to conduct independent audits of Facebook’s data, algorithms and research, and that the information be made available to the public, scholars and researchers to interpret, with adequate privacy protection and anonymization in place. Besides taking into account the size and scope of the platforms it regulates, Zuckerberg asks that the practices of the body be “fair and clear” and that unrelated issues “like encryption or privacy changes” be dealt with separately.

With reporting from Riley Steward


Social Media

Congress Must Force Facebook to Make Internal Research Public, Whistleblower Testifies

Frances Haugen testifies before the Senate subcommittee studying how to protect kids online after revealing herself as the Facebook whistleblower.


Facebook whistleblower Frances Haugen testifies in front of Senate committee on October 5.

WASHINGTON, October 5, 2021 – The former Facebook employee who outed herself as the whistleblower who leaked documents to the Wall Street Journal showing Facebook knew its photo-sharing app Instagram contributed to harming the mental health of kids told a Senate committee that the company’s alleged profit-driven motives mean its internal research cannot be kept behind closed doors.

Frances Haugen testified Tuesday in front of the Senate Subcommittee on Consumer Protection, Product Safety and Data Security, which is looking into protecting kids online, after identifying herself Sunday on the television program 60 Minutes as the person who gave the Journal and the Securities and Exchange Commission documents showing the company pressing ahead with development of a kids version of Instagram despite knowing the mental health impact its apps have on that demographic. (Facebook has since halted development of the kids app following the Journal story and lawmakers’ calls for its suspension.)

“We should not expect Facebook to change. We need action from Congress,” Haugen said Tuesday.

That action, she recommended, includes forcing Facebook to make all future internal research fully public because the company cannot be trusted to act on its own commissioned work.

Haugen said the reason the company did not — and does not — take such action, which could include preemptively shutting down development of its Instagram for kids product, is that the company is allegedly driven by a profit-first model.

“Facebook repeatedly encountered conflicts between its own profits and our safety. Facebook consistently resolved those conflicts in favor of its own profits,” alleged Haugen, who now considers herself an advocate for public oversight of social media.

“The result has been a system that amplifies division, extremism, and polarization — and undermining societies around the world. In some cases, this dangerous online talk has led to actual violence that harms and even kills people. In other cases, their profit optimizing machine is generating self-harm and self-hate — especially for vulnerable groups, like teenage girls. These problems have been confirmed repeatedly by Facebook’s own internal research.”

Despite calls to modify Section 230 of the Communications Decency Act, which shields large tech platforms from legal liability for what their users post, Haugen said that such changes, along with tweaks to outdated privacy protections, won’t be enough.

Facebook has for months touted that it removes millions of groups and accounts that violate its community guidelines on hate speech and inciting violence. But Haugen alleges that despite the claims that it actively makes its platforms safer, the company in actuality takes down only three to five percent of those threats.

Asked by Senator Ben Ray Lujan, D-New Mexico, whether Facebook “ever found a feature on its platform harmed its users, but the feature moved forward because it would also grow users or increase revenue,” Haugen said yes, alleging the company prioritized ease of resharing over the feature’s susceptibility to growing “hate speech, misinformation or violence incitement,” even though the feature would only “decrease growth a tiny, little amount.”

She also alleged that those directions came from the head of the company himself, Mark Zuckerberg, who allegedly chose arbitrary or vague “metrics defined by Facebook, like meaningful social interactions over changes that would have significantly decreased misinformation, hate speech and other inciting content.”

Facebook’s troubles, up to this point

Facebook has already been the target of Washington’s ire for months now. It has been cited as an alleged enabler of the January 6 Capitol Hill riot that sought to stop the transition to a Joe Biden presidency, despite the platform banning former president Donald Trump. Its platform has also been blamed for allowing the spread of information that has led to violence in parts of the world, including genocide in Myanmar.

A number of public interest groups have also accused the platform of suppressing stories from progressive news outlets, censoring information that conflicts with its own interests, and using algorithms that deliver the same kinds of information to people so they are not exposed to different viewpoints.

In 2018, Facebook made worldwide news after reports in the Guardian and the New York Times found nearly 100 million Facebook profiles were harvested by a company called Cambridge Analytica, which used the data to build profiles of people and target them with material intended to sway them politically.

Federal regulators have already been looking to rein in Facebook and other Big Tech companies, a clear agenda item of the Biden administration. The White House has installed Amazon critic Lina Khan as head of the Federal Trade Commission, which recently filed a monopoly complaint against Facebook in court, and has appointed other figures, including Google critic Jonathan Kanter, to the Department of Justice’s antitrust division.

Facebook’s week has gone from bad to worse. Haugen, a former Facebook product manager and Harvard MBA graduate, testified Tuesday in a hearing titled “Protecting Kids Online” before the Subcommittee on Consumer Protection, Product Safety, and Data Security. Previous opposition to Facebook’s plans to expand its products to minors has come from external parties like public interest groups and Congress.

