President Trump to Sign Executive Order on AI at 4 p.m.
BREAKING: White House action could have dramatic impact on the Broadband Equity, Access and Deployment program
Akul Saxena
WASHINGTON, Nov. 24, 2025 – President Trump is expected to sign an executive order on artificial intelligence at 4 p.m. Monday, Broadband Breakfast has confirmed.
If the executive order to be signed matches the draft order floated last Wednesday, it could have a dramatic impact on “non-deployment funds” under the Broadband Equity, Access and Deployment program.
The draft order floated Wednesday, Nov. 19, would direct National Telecommunications and Information Administration Administrator Arielle Roth to issue a policy notice within 90 days “specifying the conditions under which states may be eligible” for non-deployment funding under the $42.45 billion BEAD program.
The Nov. 19 draft, entitled “Eliminating State Law Obstruction of National AI Policy,” would also create a Department of Justice litigation task force to challenge state AI statutes on interstate commerce and state authority grounds. It would also direct the Commerce Department to flag state AI rules deemed “onerous” and notify those states that they would lose access to non-deployment funding under the BEAD program, a part of the bipartisan Infrastructure Investment and Jobs Act of 2021.
Under the draft executive order, the litigation task force is also tasked with consulting with David Sacks, the special advisor for AI and crypto, the head of the Office of Science and Technology Policy, and others.
Takes aim at Democratic-run states
The draft singled out California’s SB 53, which requires developers of advanced models to report “catastrophic risks” to state regulators, and Colorado’s SB 24-205, which bars algorithmic discrimination in employment, housing and credit markets.
There has been some speculation that the White House had placed the draft “on hold” after pushback from governors, attorneys general and lawmakers in both parties.
The specific contents of today's executive order on artificial intelligence remain unclear.
In July, the Senate voted 99 to 1 to reject a budget provision that would have imposed a ten-year moratorium on state AI regulation and linked BEAD funding to compliance.
Democrats and some Republicans critical
A consumer advocacy group and Federal Communications Commissioner Anna Gomez on Thursday criticized the draft order.
Members of Congress raised similar concerns. Sen. Amy Klobuchar, D-Minn., said Thursday the draft was unlawful and would attack states for enacting AI guardrails that “protect consumers, children and creators, including by threatening broadband access in rural communities.”
Rep. Marjorie Taylor Greene, R-Ga., who announced Friday that she will resign from Congress in January, wrote the same day that states must retain the authority to regulate artificial intelligence and that federalism needed to be preserved. California State Sen. Thomas Umberg, D-Santa Ana, said: “Trump's executive order isn't bold federal action, it's federal abandonment of our children's safety.”
Civil liberties groups also concerned
Civil liberties groups issued further warnings. The Electronic Frontier Foundation, a San Francisco-based digital rights organization, said the directive would strengthen industry influence and weaken state protections around automated decision-making systems.
“Stopping states from acting on AI will stop progress,” EFF wrote, adding that “Companies that produce AI and automated decision-making software have spent millions in state capitals and in Congress to slow or roll back legal protections regulating artificial intelligence.”
What state-level regulations exist and why
Four states — Colorado, California, Utah and Texas — have passed laws that set some rules for AI across the private sector, according to the International Association of Privacy Professionals.
Those laws include limiting the collection of certain personal information and requiring more transparency from companies.
The laws are in response to AI that already pervades everyday life. The technology helps make consequential decisions for Americans, including who gets a job interview, an apartment lease, a home loan and even certain medical care. But research has shown that it can make mistakes in those decisions, including by prioritizing a particular gender or race.
“It’s not a matter of AI makes mistakes and humans never do,” said Calli Schroeder, director of the AI & Human Rights Program at the public interest group EPIC.
“With a human, I can say, ‘Hey, explain, how did you come to that conclusion, what factors did you consider?’” she continued. “With an AI, I can’t ask any of that, and I can’t find that out. And frankly, half the time the programmers of the AI couldn’t answer that question.”
States' more ambitious AI regulation proposals require private companies to provide transparency and assess the possible risks of discrimination from their AI programs.
Beyond those more sweeping rules, many states have regulated parts of AI: barring the use of deepfakes in elections and to create nonconsensual porn, for example, or putting rules in place around the government's own use of AI.
What Trump and some Republicans want to do
The draft executive order would direct federal agencies to identify burdensome state AI regulations and pressure states not to enact them, including by withholding federal funding or challenging the state laws in court.
It would also begin a process to develop a lighter-touch regulatory framework for the whole country that would override state AI laws.
Trump's argument is that a patchwork of regulations across 50 states impedes AI companies' growth and allows China to catch up to the U.S. in the AI race. The president has also said state regulations are producing “Woke AI.”
The draft executive order that was leaked could change and should not be taken as final, said a senior Trump administration official who requested anonymity to describe internal White House discussions.
Why attempts at federal regulation have failed
Some Republicans in Congress have previously tried and failed to ban states from regulating AI.
Part of the challenge is that opposition is coming from their party's own ranks.
Florida's Republican governor, Ron DeSantis, said in a post on X this week that a federal law barring state regulation of AI was “Not acceptable.”
On Nov. 18, the day before the draft's release, DeSantis called a proposed National Defense Authorization Act amendment containing similar language “a subsidy to Big Tech” and said it would stop states from protecting against a list of harms, including “predatory applications that target children” and “online censorship of political speech.”
A federal ban on states regulating AI is also unpopular, said Cody Venzke, senior policy counsel at the ACLU’s National Political Advocacy Department.
“The American people do not want AI to be discriminatory, to be unsafe, to be hallucinatory,” he said. “So I don’t think anyone is interested in winning the AI race if it means AI that is not trustworthy.”
Associated Press Writer Jesse Bedayn contributed to background for this article.