Social Media

Senator Markey Pleased with Pressure on Companies to Protect Children Online

Senator Edward Markey has been a proponent of raising the age threshold in child privacy laws for years.

Photo of Senator Edward Markey

WASHINGTON, March 3, 2022 – Senator Edward Markey, D-Mass., on Monday praised the pressure that lawsuits and whistleblower testimony have put on companies that violate online protections for children, and on Thursday praised President Joe Biden’s remarks on child protection in his State of the Union address, the senator’s latest vocal push to pass enhanced laws for online protections for children.

Current law, the Children’s Online Privacy Protection Act, includes online protections for children under 13, and Markey has been pushing to have the age threshold increased. In May last year, senators including Markey introduced bipartisan legislation – called the Children and Teens’ Online Privacy Protection Act – that would extend greater online consumer protections to minors, including making it illegal for companies to collect data from anyone 13 to 15 years old without their consent.

The comments came before a coalition of attorneys general from across the nation announced Wednesday that they would investigate the impact social media platform TikTok has on children. They also came before President Biden said during his State of the Union address that companies must be held accountable for the “national experiment they’re conducting on our children for profit.”

On Thursday, Markey and Bill Cassidy, R-La., released a joint statement praising Biden’s comments and said in a letter that day to Commerce Secretary Gina Raimondo that they are ready to work with the White House to push the Children and Teens’ Online Privacy Protection Act forward. “There is a direct link between the lack of online privacy protections for young people and the youth mental health crisis in this country,” the letter said.

Before that, Markey told the 2022 State of the Net conference on Monday that he has been pleased with actions taken by those challenging big technology companies such as Facebook and Google.

Pressure mounting on social media companies

He noted testimony from Facebook whistleblower Frances Haugen, who leaked internal company documents to the Wall Street Journal and the Securities and Exchange Commission showing that the company’s photo-sharing app Instagram was having a negative impact on children, and that the company allegedly did not address the issues.

The fallout from the testimony and pressure from lawmakers forced Facebook to pause development of its “Instagram for Kids” product.

There have also been lawsuits in the past, such as a 2019 action by the Federal Trade Commission and the New York attorney general against Google and its subsidiary YouTube for collecting children’s personal data without their parents’ knowledge. The case ended with Google and YouTube paying a $170 million civil penalty.

Markey’s comments come after the introduction of a bipartisan bill by Senator Richard Blumenthal, D-Conn., and Senator Marsha Blackburn, R-Tenn. The Kids Online Safety Act would require platforms to give guardians control over their child’s use of social media, including the ability to block certain content and limit screen time.

Markey cited multiple statistics, including the fact that 95 percent of teens have access to a smartphone and that young people’s screen time doubled during the pandemic, to support his claim that young Americans, especially tweens and teens, need protection.

“Do we have the courage to take on this issue?” Markey asked on Monday, in reference to protecting those under the age of 16.

Reporter Ashlan Gruwell studied political science at Brigham Young University. She has immersed herself in principles of American politics and voter behavior. She also enjoys traveling internationally and hopes to visit the Nordic Region of Europe next.

Free Speech

Additional Content Moderation for Section 230 Protection Risks Reducing Speech on Platforms: Judge

People will migrate away from platforms with overly stringent content moderation measures.

Photo of Douglas Ginsburg by Barbara Potter/Free to Choose Media

WASHINGTON, March 13, 2023 – Requiring companies to moderate more content as a condition of Section 230 legal liability protections runs the risk of alienating users from platforms and discouraging communications, argued a judge of the U.S. Court of Appeals for the District of Columbia Circuit last week.

“The criteria for deletion are vague and difficult to parse,” Douglas Ginsburg, a Ronald Reagan appointee, said at a Federalist Society event on Wednesday. “Some of the terms are inherently difficult to define and policing what qualifies as hate speech is often a subjective determination.”

“If content moderation became very rigorous, it is obvious that users would depart from platforms that wouldn’t run their stuff,” Ginsburg added. “And they will try to find more platforms out there that will give them a voice. So, we’ll have more fragmentation and even less communication.”

Ginsburg noted that the large technology platforms already moderate a massive amount of content, and that additional moderation would be challenging.

“Twitter, YouTube and Facebook remove millions of posts and videos based on those criteria alone,” Ginsburg noted. “YouTube gets 500 hours of video uploaded every minute, 30,000 minutes of video coming online every minute. So the task of moderating this is obviously very challenging.”

John Samples, a member of Meta’s Oversight Board – which provides direction for the company on content – suggested Thursday that out-of-court dispute institutions for content moderation may become the preferred method of settlement.

The United States may in the future adopt European processes as Europe takes the lead in moderating big tech, claimed Samples.

“It would largely be a private system,” he said, one that could unify and centralize social media moderation across platforms and around the world. Samples was referring to the European Union’s Digital Services Act, which went into effect in November 2022 and requires platforms to remove illegal content and ensure that users can contest removal of their content.

Section 230

Section 230 Shuts Down Conversation on First Amendment, Panel Hears

The law prevents discussion of how the First Amendment should be applied in a new age of technology, says expert.

Photo of Ron Yokubaitis of Texas.net, Ashley Johnson of Information Technology and Innovation Foundation, Emma Llanso of Center for Democracy and Technology, Matthew Bergman of Social Media Victims Law Center, and Chris Marchese of Netchoice (left to right)

WASHINGTON, March 9, 2023 – Section 230 as written shuts down the conversation about the First Amendment, claimed experts in a debate at Broadband Breakfast’s Big Tech & Speech Summit Thursday.

Matthew Bergman, founder of the Social Media Victims Law Center, suggested that Section 230 forecloses discussion of the appropriate weighing of the costs and benefits of giving big tech companies litigation immunity for moderation decisions on their platforms.

We need to talk about what level of First Amendment protection is appropriate in a new world of technology, said Bergman. That discussion happens primarily through an open litigation process, he said, which is currently unavailable to those harmed by these products.

All companies must exercise reasonable care, Bergman argued. Opening the door to litigation doesn’t mean that all claims are necessarily viable, only that the process should be allowed to work itself out in the courts, he said.

Eliminating Section 230 could lead to online services “overcorrecting” in their moderation of speech, which could suffocate social reform movements organized on those platforms, argued Ashley Johnson of the Information Technology and Innovation Foundation, a research institution.

Furthermore, the burden of litigation would fall disproportionately on the companies with fewer resources to defend themselves, she continued.

Bergman responded: “If a social media platform is facing a lot of lawsuits because there are a lot of kids who have been hurt through the negligent design of that platform, why is that a bad thing?” People who are injured have the right by law to seek redress against the entity that caused the injury, Bergman said.

Emma Llanso of the Center for Democracy and Technology suggested that platforms would fundamentally change the way they operate to avoid the threat of litigation if Section 230 were reformed or abolished, which could threaten freedom of speech for their users.

It is necessary for the protection of the First Amendment that the internet consist of many platforms with different content moderation policies, ensuring that all people have a voice, she said.

To this, Bergman argued that there is a distinction between algorithms that push content on users who did not seek it out – even content they did not know existed – and ensuring speech is not censored.

The question is one of balancing the faulty design of a product against protecting speech, and the courts are where that balancing act should take place, said Bergman.

This comes days after legal professionals urged Congress to amend the statute to specify that it applies only to free speech, rather than to the negligent design of product features that promote harmful speech. The discussion followed the Supreme Court’s review of a lower court decision providing immunity to Google for recommending terrorist videos on its video platform YouTube.

Free Speech

Creating Institutions for Resolving Content Moderation Disputes Out-of-Court

Private institutions may become the primary method for resolving content moderation disputes, says expert.

Photo of John Samples, member of Meta's Oversight Board

WASHINGTON, March 9, 2023 – A member of Meta’s oversight board, John Samples, suggested at Broadband Breakfast’s Big Tech & Speech Summit Thursday that out-of-court dispute institutions for content moderation may become the preferred method of settlement.

Meta’s oversight board was created by the company to support free speech by upholding or reversing Facebook’s content moderation decisions. It works independently of the company and has 40 members from around the world.

The European Union’s Digital Services Act, which came into force in November 2022, requires platforms to remove illegal content and ensure that users can contest removal of their content. It clarifies that platforms are liable for users’ unlawful behavior only if they are aware of it and fail to remove it.

The Act specifies illegal speech to include speech that harms the electoral system, hate speech, and speech that harms fundamental rights. The appeals process allows citizens to go directly to the company, to the national courts, or to out-of-court dispute resolution institutions, although no such institutions currently exist in Europe.

According to Samples, the Act opens the way for private organizations like the oversight board to play a part in moderation disputes. “Meta has a tremendous advantage here as a first mover,” said Samples, “and the model of the oversight board may well spread to Europe and perhaps other places.” 

The United States may in the future adopt European processes as Europe takes the lead in moderating big tech, claimed Samples. “It would largely be a private system,” he said, one that could unify and centralize social media moderation across platforms and around the world.

The private option of self-regulation has worked well, said Samples. “It may well be expanding throughout much of the world. If it goes to Europe, it could go throughout.” 

Currently, of the media that Meta reviews for moderation, only one percent is restricted, either by taking down the content or reducing the size of the audience exposed to it, said Samples. The oversight board primarily rules against Meta’s decisions and accepts comments from independent interests.  
