Panel Weighs New Guardrails for AI Chatbots as Concerns Mount Over Child Safety
Experts debated AI chatbot risks to children, with advocates warning of manipulation techniques while skeptics urged perspective on technology fears.
Broadband Breakfast
WASHINGTON, Nov. 26, 2025 — Federal lawmakers are advancing legislation to regulate artificial intelligence chatbots amid growing concerns about their impact on children, with proposals ranging from educational guidance to age restrictions and disclosure requirements.
Rep. Erin Houchin, R-Ind., outlined three bipartisan bills during a Broadband Breakfast webcast Wednesday, including measures that would require AI systems to disclose when users are interacting with non-human entities and prevent chatbots from impersonating licensed professionals.
"No chatbot should act like a therapist or a doctor or an adviser, particularly to a minor," Houchin said during the event, which focused on AI's implications for youth safety.
Representative Erin Houchin (Photo: Broadband Breakfast)
The Congresswoman said personal experience drove her legislative approach. She recounted discovering that one of her children had opened a social media account at age 13, and that the platform refused to delete the account, citing the current legal standard set by the Children's Online Privacy Protection Act of 1998.
"Times have dramatically changed and we know the dangers that are there," Houchin said. "So in our view, we shouldn't stick to a 13 year old standard."
Her proposed Reset Act would establish a national standard prohibiting social media platforms from allowing accounts for users under 16. The House Energy and Commerce Committee plans to hold a hearing on 19 child protection bills when Congress returns from Thanksgiving recess.
Panel outlines risks and developmental vulnerabilities
Experts on the panel outlined specific developmental vulnerabilities that make children susceptible to AI-related harms.
Dr. Jenny Radesky, associate professor of pediatrics at the University of Michigan Medical School, explained that young people "often think that anthropomorphized technologies are alive" and are "more likely to overtrust in the information that they get from technology."
"Teenagers have much more intense emotions as they figure out who they are and what matters to them," Radesky said, noting that AI systems designed to optimize engagement rather than build specific skills pose particular risks.
Amina Fazlullah of Common Sense Media reported that research shows three in four teens are using some form of AI companion, with about 30 percent preferring to speak with AI as much as or more than with real humans. She warned that AI companions employ techniques similar to those used by child predators, including "mirroring, sycophancy, love bombing" and "encouraging secrecy and isolation."
"Only about 37 percent of parents know their teens are using AI," Fazlullah said.
However, Corbin Barthold, internet policy counsel at TechFreedom, urged perspective, arguing that society repeatedly overreacts to new technologies that children adopt.
"Don't freak out. The kids are gonna be okay," Barhtold said. He acknowledged that companies should address problems like safeguards breaking down during extended conversations but cautioned against fear-based policymaking.
"It's easy to tell a mechanistic story as to why something is harmful," Barthold said, comparing current AI concerns to past panics over video games and social media.
The panelists agreed on the need for federal data privacy legislation, with Radesky noting concerns about "personalization around your psychological vulnerabilities" based on how users interact with AI systems.
