Democrat, Public Interest Groups Challenge Legality of White House AI Directive

Critics warn federal preemption efforts exceed agency authority and statutory limits.

Photo of Nat Purser, senior policy advocate at Public Knowledge

WASHINGTON, Nov. 20, 2025 – A consumer advocacy group and Federal Communications Commissioner Anna Gomez on Thursday criticized a leaked executive order that would direct federal agencies to withhold broadband funding from states that regulate artificial intelligence.

The draft order would instruct the National Telecommunications and Information Administration to withhold Broadband Equity, Access, and Deployment program non-deployment funds from states with AI laws that the Commerce Department deemed “onerous.” The non-deployment category represents about $21 billion of the BEAD program’s $42.45 billion and covers proposals ranging from workforce development to digital literacy.

The order would also establish a Department of Justice task force to challenge state AI laws in federal court, and it would direct the Federal Communications Commission to prepare rules creating a federal AI framework that overrides state laws.

‘Dubious at best,’ says agency commissioner

Speaking at the agency’s press conference on Thursday, Anna Gomez, a Democratic commissioner at the FCC, questioned whether the agency had authority to displace state AI laws.

She said the commission’s jurisdiction was “dubious at best” because its traditional authority covers communications networks, not general state regulations governing algorithmic use.

“It’s a stretch to say that, because telecom companies use AI in operating their networks, that somehow gives us jurisdiction to preempt states,” Gomez said. She asked whether the commission had authority to preempt state actions on “power” or “employment practices,” drawing a comparison to other areas where the FCC lacks statutory reach.

In addition to assigning the FCC those roles, the draft would order the Federal Trade Commission to evaluate whether state efforts to limit bias or discrimination in AI models violate the FTC Act’s ban on deceptive practices.

The draft order identified state laws it considered barriers to AI development. California’s SB 53, signed by Gov. Gavin Newsom (D) in September, requires developers of advanced AI models to report catastrophic risks to state regulators.

Colorado’s SB 24-205 prohibits algorithmic discrimination in employment, housing, and credit decisions. Colorado delayed implementation of its law until June 2026 following opposition from technology companies.

Public Knowledge criticizes draft order

Public Knowledge, in a Thursday statement, said the draft order attempted to override state authority without citing any legal basis.

“Rather than doing the hard work of establishing a federal regulatory standard for AI with Congress, the White House has invented authorities and delegated non-existent powers to the agencies and executives most anxious to do the President’s bidding,” said Nat Purser, a senior policy advocate at Public Knowledge.

Purser said Congress should establish a federal framework for AI but noted that “where Congress has failed to act, states have stepped up.”

A similar effort surfaced in July, when the Senate voted 99-1 to reject a budget provision that would have barred states from regulating AI for 10 years and tied BEAD funding to compliance. 

Sean Conway, a partner at global law firm Akin and former deputy chief counsel at NTIA, said in an email that the legality of conditioning federal broadband funds on state AI laws would not be known until NTIA ultimately structured the requirement. He said such conditions could raise “grants law, appropriations law, and even constitutional issues” depending on the “substance of the conditions.”

Other civil rights groups pushed back on the draft order, arguing that preempting state AI laws would strip consumers of essential protections.

Alejandra Montoya-Bayer, vice president of the Leadership Conference’s Center for Civil Rights and Technology, cited cases of “Medicare enrollees denied life-saving health care,” of criminal suspects misidentified by automated systems, and of low-income families blocked from rental housing.

The group called the moratorium a “nonstarter,” and said the directive underscored the need for Congress to enact federal safeguards that ensure “AI truly works for the people.”

The draft order remained unsigned as of Thursday, and the White House had not indicated when it planned to finalize the directive.

Editor's note: The story has been updated to include comment from the Leadership Conference's Center for Civil Rights and Technology.
