
What happened
A federal judge in California on Thursday temporarily blocked the Pentagon from labeling Anthropic a “supply chain risk,” a designation that effectively blacklisted the AI company from U.S. government contracts. U.S. District Judge Rita Lin said the “broad punitive measures” imposed by Defense Secretary Pete Hegseth likely violated Anthropic’s due process and free speech rights.
Who said what
The ruling was a “clear victory” for Anthropic in its “bitter power struggle with the Defense Department over the use of its Claude system by the military,” The Washington Post said. During negotiations for a $200 million contract, Anthropic wanted to keep safeguards barring the use of its AI for autonomous weapons and for surveilling Americans, while the Pentagon rejected any limits imposed by a private contractor. When the dispute became public, Hegseth blacklisted Anthropic using an “obscure government-procurement statute aimed at protecting military systems from foreign sabotage,” Reuters said.
“Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government,” Lin wrote in her 43-page ruling. If the Pentagon had real national security concerns, it “could just stop using Claude.”
What next?
Lin paused her ruling for seven days to give the Pentagon a chance to appeal. The outcome of the case and a similar challenge pending in Washington, D.C., have broad “implications for AI use in war,” The New York Times said. While the Trump administration has said it would “transition away” from Anthropic’s AI, the Post said, Claude is “deeply embedded in the military’s systems” and the Pentagon “has been continuing to use it in support of its bombing campaign in Iran.”