Experts Debate AI Strategy and the ‘Race’ for Global Leadership

Panelists weigh risks, values, and accountability following announcement of White House AI action plan

Photo by Alex Knight used with permission

WASHINGTON, July 24, 2025 – Panelists at a Broadband Breakfast Live Online event on Wednesday were divided on whether the United States should view artificial intelligence through the lens of geopolitical competition with China.

“I think that the United States has turned an important corner,” said Adam Thierer, Senior Fellow at the R Street Institute, reacting to the 28-page federal plan released Wednesday by the Trump administration. He said the plan marked “a decidedly different approach to AI than we were hearing two years and two months ago.”

In contrast to the Biden administration’s “heavy-handed bureaucratic” approach, he said, “we’ve realized that we’re in an international race against China for global AI supremacy.”

Broadband Breakfast on July 23, 2025 – The Politics of Artificial Intelligence: How can we expect to see regulations on artificial intelligence evolve over time?

But Yonathan Arbel, Rose Professor of Law at the University of Alabama School of Law, disagreed with the idea of a race to AI supremacy. “I’m not sure what winning the race looks like, or how it ensures long-term ‘winning conditions,’” he said. “I don’t like this race metaphor, and I think it leads us down a very dark road where we have to win no matter what.”

Thierer countered that the contest is also about global values and which country will be most influential in AI. “It’s about whose systems, technologies, forms of speech, and everything else will dominate the globe,” he said.

Preemption of state laws also a major concern

More than 1,000 AI-related bills are pending in state legislatures, including sweeping frameworks in California, Colorado, New York and Illinois, Thierer noted with concern.

Sarah Oh Lam, Senior Fellow and Vice President of Strategic Initiatives at the Technology Policy Institute, agreed. She cautioned against letting a patchwork of state laws slow progress. “Rather than being precautionary and afraid of innovation, I think the AI action plan is very forward-looking,” she said.

But Chris Chambers Goodman, a Pepperdine law professor who studies algorithmic bias, countered that public concern over unchecked deployment is growing. She told viewers that “safety might be more important than innovation unfettered,” pointing to discriminatory hiring algorithms and workplace surveillance.

Thierer saw it differently: “Sacramento and Albany and Springfield and Denver are calling the shots for the AI marketplace in the United States.”

All four speakers agreed that Congress has so far failed to craft a comprehensive federal privacy law, the product of long-running gridlock over substantive legislation. The same may well prove true of AI policy.

Who’s liable when AI gets it wrong?

In the absence of legislative action, Oh Lam predicted that liability battles over faulty AI tools will play out in court.

“You still have human agency,” said Oh Lam. She compared AI tools that make decisions in areas from healthcare to hiring with other professional software: human actors are still ultimately responsible for choosing and applying them.

But Arbel pushed back, noting that growing reliance on automated systems complicates that view. “The concern is that, when AI does something wrong, like discrimination, which human does that fall back on?”

These gray areas, Oh Lam said, are likely to drive future legislation. As AI systems become more autonomous, lawmakers will need to establish frameworks to assign responsibility when harm occurs.
