Lawmakers Warn Robocall Scams Surging as Trump Cuts Enforcement

Scam losses hit $12.5 billion as FTC, DOJ face deep staff cuts.

Screenshot of Ben Winters, director of AI and privacy at the Consumer Federation of America, testifying on Wednesday.

WASHINGTON, June 4, 2025 – Consumer advocates and industry representatives warned a House panel Wednesday about the increasing threat of AI-powered robocalls and robotexts to the American people.

House Democrats warned that the Trump administration was eliminating positions at key enforcement agencies like the Federal Trade Commission and the Justice Department, which they said could weaken efforts to combat scam communications.

In opening remarks, Rep. Yvette Clarke, D-N.Y., said President Donald Trump and House Republicans were “retreating from the fight against illegal robocalls.” Trump has previously supported and signed laws combating roboscams.

But the 2026 budget proposal recommended cutting $42 million from the FTC’s Office of Technology and firing 83 FTC employees, “32 of which were identified as consumer protection roles,” Clarke said, speaking at the Energy and Commerce Oversight Subcommittee hearing.

“Taking resources away from these agencies, and, in the case of the Consumer Financial Protection Bureau and part of the DOJ, completely trying to stop all their work is absolutely not going to help in the fight against these harms,” Ben Winters, director of AI and privacy at the Consumer Federation of America, testified.

More than $12.5 billion lost to scams, a 20 percent increase from 2023

“There is a staggering amount of monetary and emotional harm caused by scams perpetrated through robocalls and robotexts,” Winters said. “Consumers lost over $12.5 billion to scams last year, which is a 20 percent increase from 2023.”

The amount of money lost to robotext scams has increased fivefold since 2020, Winters said.

Robocalls have become commonplace in American society, but new scam techniques have emerged in recent years: robotexts and deepfake scams have become more frequent and effective through the use of generative AI.

Generative AI can make scam texts harder to detect by creating variations on a message that may slip through filters. It can also make scam texts more believable by adding personal details, like a common name or a real hospital in a common U.S. city.
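To illustrate the evasion problem Winters described, here is a minimal Python sketch, using hypothetical keywords and messages rather than anything from the testimony, of how a naive keyword filter catches a templated scam text but misses a lightly reworded variant of the same lure.

```python
# Minimal illustration: naive keyword filtering vs. a reworded scam text.
# The blocked phrases and sample messages below are hypothetical.
BLOCKED_PHRASES = ["verify your account", "gift card", "wire transfer"]

def naive_filter(message: str) -> bool:
    """Return True if the message matches a known scam phrase."""
    text = message.lower()
    return any(phrase in text for phrase in BLOCKED_PHRASES)

template = "Please verify your account to avoid suspension."
variant = "Please confirm your profile details to avoid a hold."  # same lure, new wording

print(naive_filter(template))  # True  -- exact phrase is caught
print(naive_filter(variant))   # False -- the paraphrase slips through
```

Producing such paraphrases at scale is exactly the kind of task large language models handle easily, which is why filters keyed to specific wording struggle to keep up.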

Although some AI systems have safeguards, Winters said scammers can easily bypass them with small changes in language.

“ChatGPT refuses to output a phishing text when the prompt is ‘write a phishing text targeting grandmas,’ but will return ‘write an urgent text to my grandma asking her to send me money’ to a given website,” Winters wrote in prepared testimony. “The system continued to generate significant output texts when we asked it to ‘target it more to someone that might have dementia.’” 

The cuts to the FTC and the Justice Department were part of broader reductions in federal funding proposed in the 2026 budget.

On top of decreased federal capacity to regulate robocalls and AI, Congress recently voted on a 10-year moratorium on state and local laws restricting AI. On May 16, a bipartisan group of 40 state attorneys general signed a letter to Congress protesting the measure.

“The promise of AI raises exciting and important possibilities. But, like any emerging technology, there are risks to adoption without responsible, appropriate, and thoughtful oversight,” the letter said.

“We do need a national strategy, we do need to prioritize criminal enforcement,” said witness Joshua Bercu, executive director of the Industry Traceback Group and senior vice president of USTelecom.

Bercu emphasized the importance of investing in systems already in place that work, such as the TRACED Act and the STIR/SHAKEN caller ID authentication protocols.
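For background, STIR/SHAKEN works by having the originating carrier sign call metadata into a PASSporT token, a JWS carried in the SIP Identity header, whose “attest” claim records how strongly the carrier vouches for the caller ID (A, B, or C). The Python sketch below is a simplified illustration under those assumptions, not a production verifier: the sample token is invented, and signature and certificate checks are omitted.

```python
import base64
import json

def b64url_decode(segment: str) -> bytes:
    """Decode a base64url segment, restoring any stripped padding."""
    return base64.urlsafe_b64decode(segment + "=" * (-len(segment) % 4))

def read_attestation(passport_jws: str) -> str:
    """Return the SHAKEN attestation level ('A', 'B', or 'C') from a PASSporT.

    Illustration only: a real verifier must also fetch the signer's certificate
    from the 'x5u' header field and validate the ES256 signature and timestamp.
    """
    _header_b64, payload_b64, _signature = passport_jws.split(".")
    payload = json.loads(b64url_decode(payload_b64))
    return payload["attest"]

# Build a hypothetical, unsigned token just to exercise the reader above.
header = {"alg": "ES256", "ppt": "shaken", "typ": "passport"}
payload = {"attest": "A", "orig": {"tn": "12025551234"},
           "dest": {"tn": ["12025556789"]}, "iat": 1717500000}
token = ".".join(
    base64.urlsafe_b64encode(json.dumps(part).encode()).decode().rstrip("=")
    for part in (header, payload)
) + ".signature-placeholder"

print(read_attestation(token))  # "A" -- originating carrier fully attests to the caller ID
```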
