Technology, National Security Leaders Say Anthropic and Trump’s ‘Messy Public Breakup’ Will Harm Both

Anthropic’s supply chain risk designation is contradictory, since the Defense Department has used the company’s tools in military operations against Iran, said CNAS Executive Vice President Paul Scharre.

Photo of moderator Vivek Chilukuri, CNAS Executive Vice President Paul Scharre, and Technology and National Security Program Adjunct Senior Fellow Jack Shanahan at a CNAS panel on March 10, 2026.

WASHINGTON, March 12, 2026 – Technology and national security leaders said the “messy public breakup” between Anthropic and the Department of Defense will ultimately harm both entities. 

Paul Scharre, executive vice president of the Center for a New American Security (CNAS), said the federal government’s response to Anthropic’s refusal to let the Pentagon use its artificial intelligence is “totally inappropriate” and an act of “retribution.” 

Defense Secretary Pete Hegseth said he wanted to use Anthropic’s AI chatbot as part of internal U.S. military operations. The company’s CEO, Dario Amodei, expressed ethical concerns about unchecked AI use within the government, including fully autonomous armed drones and mass surveillance. In late February, Hegseth issued Anthropic a deadline to permit unrestricted military use, which the company refused.

As a result, President Donald Trump ordered all federal agencies to phase out the use of Anthropic technology and designated Anthropic a supply chain risk, a label indicating the company could enable sabotage, surveillance or exploitation of the U.S. government or military by foreign adversaries.  

“To use the supply chain risk designation as an act of retribution against the U.S. company is totally unprecedented. That’s not what it’s designed for,” Scharre said. “It’s designed to ensure that we don’t have foreign technology coming into the military supply chain in a way that might create risk of espionage or sabotage.” 

Scharre noted that the supply chain risk designation has been used in the past against Chinese technology companies Huawei and ZTE. 

Within 24 hours of Anthropic’s supply chain risk designation, the U.S. launched a military campaign against Iran in which Anthropic tools were used to plan operations, Scharre said. 

“What is it? Is the technology so valuable that we need it, or such a vulnerability that we make sure that nobody uses it? And I think it’s pretty clear in this contradiction, that this is really just being used as a way to enact retribution against the company,” Scharre said. 

Pointing to that same contradiction, Jack Shanahan, an adjunct senior fellow with CNAS’s Technology and National Security Program, said shutting out “one of the best” American AI companies from future government business was an unfortunate and poor decision. Shanahan said he saw the government’s stance as final, even though he hoped for a renegotiation between Anthropic and the White House. 

“We’re going to regret this decision someday. It’s short sighted,” Shanahan said. “... It’s not going to end well for any side.” 

Shanahan and Scharre joined moderator Vivek Chilukuri at CNAS’s panel, “The Pentagon and Silicon Valley: The Future of AI in National Defense.” CNAS is a nonprofit that develops national security and defense policies. 
