Democratic and Republican Lawmakers Introduce Two Opposing AI Bills
One bill would require assessments of AI impacts; the other would create a national regulatory framework.
Naomi Jindra
WASHINGTON, Sept. 23, 2025 – Two lawmakers from opposite parties introduced competing artificial intelligence bills on Friday, underscoring a divide over how AI should be regulated in the United States.
Rep. Yvette Clarke, D-N.Y., and Rep. Michael Baumgartner, R-Wash., each filed bills addressing AI systems, but with opposing approaches. Clarke called for strong guardrails on automated decision-making in sensitive areas, while Baumgartner pushed for a national framework designed to limit regulatory barriers.
Clarke’s Bill: Algorithmic Accountability Act
Clarke’s proposal, the “Algorithmic Accountability Act of 2025,” would direct the Federal Trade Commission to require companies to conduct impact assessments of AI systems before and after they are deployed.
The bill would require companies to consult with employees, ethics teams, outside experts and advocates for affected groups when evaluating the impact of their algorithms.
Companies would have to send summary reports of those reviews to the Federal Trade Commission both before they launch a new system and every year afterward.
The FTC would then have two years after the law passes to write the detailed rules governing how those assessments and reports must be conducted.
The agency would also be required to set up a public database where the summary reports are posted, so consumers and researchers can see which algorithms are in use and what risks they pose.
Businesses that fail to comply could face penalties under unfair or deceptive practices rules, with state attorneys general authorized to bring lawsuits on behalf of residents.
“Americans have the same civil liberties online as they do wherever else their lives take them,” Clarke said. “But when corporations hand off final decisions to AI systems that are too often plagued by bias, the reality is that countless people will continue to face prejudice in digital spaces.”
Clarke argued that AI already shapes critical areas of daily life, including employment, housing, education, healthcare and credit. Without oversight, she said, these systems risk amplifying inequities.
Baumgartner’s Bill: National Framework and State Preemption
Baumgartner’s bill, the “American Artificial Intelligence Leadership and Uniformity Act,” would take the opposite approach. It seeks to codify President Trump’s AI strategy by creating a national AI framework and blocking states from enacting their own AI regulations for five years.
The bill would emphasize that the United States leads the world in AI because of “a thriving innovation, investment, and development environment; and a flexible, sector-specific regulatory framework.”
It would direct the president to submit an AI action plan within 30 days of enactment, in addition to annual updates.
According to Baumgartner, the plan would remove barriers to AI development at all levels of government; set goals for federal research, adoption and risk management to ensure safe and trustworthy AI in federal missions; and strengthen supply chains, national security and critical infrastructure.
It also seeks to reduce compliance burdens for small businesses while expanding access to foundation models, computing resources, datasets and technical assistance. In addition, the proposal would align federal risk management with the National Institute of Standards and Technology’s AI guidance.
Baumgartner argued that a “patchwork of divergent state AI rules” would deter investment and create unnecessary compliance burdens. His bill allows exceptions for criminal law enforcement and state procurement policies but otherwise bars states from restricting AI systems in interstate commerce during the five-year moratorium.