Social Media
Bipartisan Alarm Over Social Media’s Harms to Children Prompts Slew of Proposed Legislation
Bills ranged from addressing intermediary liability to limiting personal data collection.

WASHINGTON, February 20, 2023 — Senators from both sides of the aisle came together on Tuesday to condemn social media platforms’ failure to protect underage users, a display of bipartisan collaboration that underscored the trend of increased government scrutiny of tech companies.
The Judiciary Committee hearing included discussion of several bills aimed at protecting children online, such as the Kids Online Safety Act (KOSA), a measure that would create a “duty of care” requiring platforms to shield children from harmful content. KOSA gained significant bipartisan traction during the previous session of Congress but ultimately failed to pass.
The bill’s co-sponsors — Sens. Richard Blumenthal, D-Conn., and Marsha Blackburn, R-Tenn. — emphasized the urgency of congressional action, pointing to research published Feb. 13 by the Centers for Disease Control and Prevention that showed a sharp increase in youth mental health challenges, particularly among girls and LGBTQ teens.
“It’s a public health emergency egregiously and knowingly exacerbated by Big Tech, aggravated by toxic content on eating disorders, bullying, even suicide — driven by Big Tech’s black box algorithms leading children down dark rabbit holes,” Blumenthal said.
In addition to social media’s impact on mental health, several senators focused on the issue of digital child sexual exploitation. Judiciary Committee Chair Dick Durbin, D-Ill., announced that he would be circulating the draft of a bill aimed at stopping the spread of online child sex abuse material by strengthening victim protection measures and platform reporting requirements. Sen. Lindsey Graham, R-S.C., said he was working with Sen. Elizabeth Warren, D-Mass., on a bill that would create a regulatory commission with the power to shut down digital platforms that failed to implement “best business practices to protect children from sexual exploitation online.”
Graham, the top Republican on the committee, added that he and Warren “have pretty divergent opinions except here — we have to do something, and the sooner the better.”
Bipartisan collaboration was a theme throughout the discussion. “I don’t know if any or all of you realize what you witnessed today, but this Judiciary Committee crosses the political spectrum — not just from Democrats to Republicans, but from real progressives to real conservatives — and what you heard was the unanimity of purpose,” Durbin said toward the end of the hearing.
Broad agreement on repealing Section 230, but not on its replacement
Some of the proposed social media bills discussed Tuesday would directly address the question of online platform immunity for third-party content. Several senators advocated for the EARN IT Act, which would assign platforms more responsibility for finding and removing child sexual abuse material — taking “a meaningful step toward reforming this unconscionably excessive Section 230 shield to Big Tech accountability,” Blumenthal argued.
The senators and witnesses who spoke at Tuesday’s hearing were largely united against Section 230. Witness Kristin Bride — whose son died by suicide after becoming the target of anonymous cyberbullying — said that her lawsuit against the anonymous messaging apps involved was dismissed on Section 230 immunity grounds.
“I think it is just absolutely vital that we change the law to allow suits like yours to go forward,” Sen. Josh Hawley, R-Mo., told Bride. “And if that means we have to repeal all of Section 230, I’m fine with it.”
However, Sen. Sheldon Whitehouse, D-R.I., noted that the primary barrier to Section 230 reform is disagreement over what should take its place. “I would be prepared to make a bet that if we took a vote on a plain Section 230 repeal, it would clear this committee with virtually every vote,” he said.
The Supreme Court is scheduled to hear a Section 230 case — Gonzalez v. Google — on Tuesday.
Other bills aim to protect kids online through age limits, privacy measures
Beyond the bills discussed at the hearing, several senators have recently proposed legislation aimed at protecting children online from a variety of angles.
On Tuesday, Hawley introduced a bill that would enforce a minimum age requirement of 16 for all users of social media platforms, as well as a bill that would commission a report on social media’s effects on underage users.
The former proposal, known as the MATURE Act, would require users to upload an image of government-issued identification in order to create a social media account, a requirement that has raised concerns among digital privacy advocates about the extent of personal data collection involved.
Personal data collection was the subject of a different bill introduced the same week by Sen. Mazie Hirono, D-Hawaii, alongside Durbin and Blumenthal. The proposed Clean Slate for Kids Online Act would update the Children’s Online Privacy Protection Act of 1998 by giving individuals the right to demand that internet companies delete all personal information collected about them before the age of 13.
Discussion of the matter comes against the backdrop of developments over the past year and a half, including investigations by state attorneys general into TikTok’s impact on kids and whistleblower testimony alleging that Facebook knew its photo-sharing app Instagram harmed the mental health of young users but failed to act.
Social Media
Congress Grills TikTok CEO Over Risks to Youth Safety and China
House lawmakers presented a united front against TikTok as calls for a national ban gain momentum.

WASHINGTON, March 24, 2023 — TikTok CEO Shou Zi Chew faced bipartisan hostility from House lawmakers during a high-profile hearing on Thursday, struggling to alleviate concerns about the platform’s safety and security risks amid growing calls for the app to be banned from the United States altogether.
For more than five hours, members of the House Energy and Commerce Committee lobbed criticisms at TikTok, often leaving Chew little or no time to address their critiques.
“TikTok has repeatedly chosen the path for more control, more surveillance and more manipulation,” Chair Cathy McMorris Rodgers, R-Wash., told Chew at the start of the hearing. “Your platform should be banned. I expect today you’ll say anything to avoid this outcome.”
“Shou came prepared to answer questions from Congress, but, unfortunately, the day was dominated by political grandstanding,” TikTok spokesperson Brooke Oberwetter said in a statement after the hearing.
In a viral TikTok video posted Tuesday, and again in his opening statement, Chew noted that the app has over 150 million monthly active users in the United States. TikTok has also become a place where “close to 5 million American businesses — mostly small businesses — go to find new customers and to fuel their growth,” he said.
But McMorris Rodgers argued that the platform’s significant reach only “emphasizes the urgency for Congress to act.”
Lawmakers condemn TikTok’s impact on youth safety and mental health
One of the top concerns highlighted by both Republicans and Democrats was the risk TikTok poses to the wellbeing of children and teens.
“Research has found that TikTok’s addictive algorithms recommend videos to teens that create and exacerbate feelings of emotional distress, including videos promoting suicide, self-harm and eating disorders,” said Ranking Member Frank Pallone, D-N.J.
Chew emphasized TikTok’s commitment to removing explicitly harmful or violative content. The company is also working with entities such as Boston Children’s Hospital to develop models for identifying content that might harm young viewers if shown too frequently, even if it is not inherently negative — videos of extreme fitness regimens, for example, Chew explained.
In addition, Chew listed several safeguards that TikTok has recently implemented for underage users, such as default daily time limits and the blocking of private messaging for users under 16.
However, few lawmakers seemed interested in these measures, with some noting that they appeared to lack enforceability. Others emphasized the tangible costs of weak safety policies, pointing to multiple youth deaths linked to the app.
Rep. Gus Bilirakis, R-Fla., shared the story of a 16-year-old boy who died by suicide after being served hundreds of TikTok videos glorifying suicidal ideation, self-harm and depression — even though such content was unrelated to his search history, according to a lawsuit filed by his parents against the platform.
At the hearing, Bilirakis underscored his concern by playing a series of TikTok videos with explicit descriptions of suicide, accompanied by messages such as “death is a gift” and “Player Tip: K!ll Yourself.”
“Your company destroyed their lives,” Bilirakis told Chew, gesturing toward the teen’s parents. “Your technology is literally leading to death, Mr. Chew.”
Other lawmakers noted that this death was not an isolated incident. “There are those on this committee, including myself, who believe that the Chinese Communist Party is engaged in psychological warfare through TikTok to deliberately influence U.S. children,” said Rep. Buddy Carter, R-Ga.
TikTok CEO emphasizes U.S. operations, denies CCP ties
Listing several viral “challenges” encouraging dangerous behaviors and substance abuse, Carter questioned why TikTok “consistently fails to identify and moderate these kinds of harmful videos” — and claimed that no such content was present on Douyin, the version of the app available in China.

Screenshot of Rep. Buddy Carter courtesy of CSPAN
Chew urged legislators to compare TikTok’s practices with those of other U.S. social media companies, rather than a version of the platform operating in an entirely different regulatory environment. “This is an industry challenge for all of us here,” he said.
Douyin heavily restricts political and controversial content in order to comply with China’s censorship regime, while the U.S. currently grants online platforms broad immunity from liability for third-party content.
In response to repeated accusations of CCP-driven censorship, particularly regarding the Chinese government’s human rights abuses against the Uyghur population, Chew maintained that related content “is available on our platform — you can go and search it.”
“We do not promote or remove content at the request of the Chinese government,” he repeatedly stated.
A TikTok search for “Uyghur genocide” on Thursday morning primarily displayed videos critical of the Chinese government, Broadband Breakfast found. The search also returned a brief description, drawn from Wikipedia, stating that China “has committed a series of ongoing human rights abuses against Uyghurs and other ethnic and religious minorities,” and pointed users to the U.S.-based website’s full article on the topic.
TikTok concerns bolster calls for Section 230 reform
Although much of the hearing was focused specifically on TikTok, some lawmakers used those concerns to bolster an ongoing congressional push for Section 230 reform.
“Last year, a federal judge in Pennsylvania found that Section 230 protected TikTok from being held responsible for the death of a 10-year-old girl who participated in a blackout challenge,” said Rep. Bob Latta, R-Ohio. “This company is a picture-perfect example of why this committee in Congress needs to take action immediately to amend Section 230.”
In response, Chew referenced Latta’s earlier remarks about Section 230’s historical importance for online innovation and growth.
“As you pointed out, 230 has been very important for freedom of expression on the internet,” Chew said. “[Free expression] is one of the commitments we have given to this committee and our users, and I do think it’s important to preserve that. But companies should be raising the bar on safety.”
Rep. John Curtis, R-Utah, asked whether TikTok’s use of algorithmic recommendations should forfeit the company’s Section 230 protections — echoing the question at the core of Gonzalez v. Google, which was argued before the Supreme Court in February.
Other inquiries were more pointed. Chew declined to answer a question from Rep. Randy Weber, R-Texas, about whether “censoring history and historical facts and current events should be protected by Section 230’s good faith requirement.”
Weber’s question seemed to incorrectly suggest that the broad immunity provided by Section 230(c)(1) is conditioned on the “good faith” referenced in subsection (c)(2)(A) of the statute.
Ranking member says ongoing data privacy initiative is unacceptable
Chew frequently pointed to TikTok’s “Project Texas” initiative as a solution to a wide range of data privacy concerns. “The bottom line is this: American data, stored on American soil, by an American company, overseen by American personnel,” he said.
All U.S. user data is now routed by default to servers run by the Texas-based company Oracle, Chew added, and TikTok aims to delete legacy data currently stored in Virginia and Singapore by the end of the year.
Several lawmakers pointed to a Thursday Wall Street Journal article in which China’s Commerce Ministry reportedly said that a sale of TikTok would require exporting technology, and therefore would be subject to approval from the Chinese government.
When asked if Chinese government approval was required for Project Texas, Chew replied, “We do not believe so.”
But many legislators remained skeptical. “I still believe that the Beijing communist government will still control and have the ability to influence what you do, and so this idea — this ‘Project Texas’ — is simply not acceptable,” Pallone said.
Free Speech
Additional Content Moderation for Section 230 Protection Risks Reducing Speech on Platforms: Judge
People will migrate away from platforms with overly stringent content moderation measures.

WASHINGTON, March 13, 2023 – Requiring companies to moderate more content as a condition of Section 230’s legal liability protections risks alienating users from platforms and discouraging communication, argued a judge of the U.S. Court of Appeals for the District of Columbia Circuit last week.
“The criteria for deletion are vague and difficult to parse,” Douglas Ginsburg, a Ronald Reagan appointee, said at a Federalist Society event on Wednesday. “Some of the terms are inherently difficult to define and policing what qualifies as hate speech is often a subjective determination.”
“If content moderation became very rigorous, it is obvious that users would depart from platforms that wouldn’t run their stuff,” Ginsburg added. “And they will try to find more platforms out there that will give them a voice. So, we’ll have more fragmentation and even less communication.”
Ginsburg noted that the large technology platforms already moderate a massive amount of content, adding that further moderation would be quite challenging.
“Twitter, YouTube and Facebook remove millions of posts and videos based on those criteria alone,” Ginsburg noted. “YouTube gets 500 hours of video uploaded every minute, 30,000 minutes of video coming online every minute. So the task of moderating this is obviously very challenging.”
John Samples, a member of Meta’s Oversight Board – which provides direction to the company on content decisions – suggested Thursday that out-of-court dispute resolution institutions may become the preferred method of settling content moderation disputes.
The United States may adopt European processes in the future as Europe takes the lead in regulating Big Tech, Samples claimed.
“It would largely be a private system,” he said, one that could unify and centralize social media moderation across platforms and around the world. Samples was referring to the European Union’s Digital Services Act, which went into effect in November 2022 and requires platforms to remove illegal content and ensure that users can contest the removal of their content.
Section 230
Section 230 Shuts Down Conversation on First Amendment, Panel Hears
The law prevents discussion of how the First Amendment should be applied in a new age of technology, says expert.

WASHINGTON, March 9, 2023 – Section 230 as written shuts down the conversation about the First Amendment, experts claimed in a debate at Broadband Breakfast’s Big Tech & Speech Summit on Thursday.
Matthew Bergman, founder of the Social Media Victims Law Center, suggested that Section 230 forecloses discussion of how to weigh the costs and benefits of granting Big Tech companies litigation immunity for the moderation decisions they make on their platforms.
We need to talk about what level of First Amendment protection is appropriate in a new world of technology, Bergman said. That discussion happens primarily through open litigation, he said, a process not currently available to those harmed by these products.

Photo of Ron Yokubaitis of Texas.net, Ashley Johnson of Information Technology and Innovation Foundation, Emma Llanso of Center for Democracy and Technology, Matthew Bergman of Social Media Victims Law Center, and Chris Marchese of Netchoice (left to right)
All companies must exercise reasonable care, Bergman argued. Opening up litigation doesn’t mean that every claim is necessarily viable, only that the process should be allowed to work itself out in the courts, he said.
Eliminating Section 230 could lead online services to overcorrect in moderating speech, which could suffocate social reform movements organized on those platforms, argued Ashley Johnson of the Information Technology and Innovation Foundation, a research institution.
Furthermore, the burden of litigation would fall disproportionately on the companies with the fewest resources to defend themselves, she continued.
Bergman responded, “if a social media platform is facing a lot of lawsuits because there are a lot of kids who have been hurt through the negligent design of that platform, why is that a bad thing?” People who are injured have the right by law to seek redress against the entity that caused that injury, Bergman said.
Emma Llanso of the Center for Democracy and Technology suggested that if Section 230 were reformed or abolished, platforms would fundamentally change the way they operate to avoid the threat of litigation, which could threaten freedom of speech for their users.
For the First Amendment to be protected, she said, the internet must consist of many platforms with different content moderation policies, ensuring that all people have a voice.
To this, Bergman argued that there is a distinction between ensuring speech is not censored and algorithms that push content users do not want to see, even content they did not know existed.
The question concerns balancing the faulty design of a product against the protection of speech, and the courts are where that balancing act should take place, Bergman said.
This comes days after legal professionals urged Congress to amend the statute to specify that it applies only to speech, rather than to the negligent design of product features that promote harmful speech. The discussion followed Supreme Court oral arguments in a case weighing whether Section 230 immunizes Google against liability for recommending terrorist videos on its video platform YouTube.