Anthropic becomes the face of AI resistance in DOD feud

The Trump administration has long trumpeted its goal to automate its operational capacity through artificial intelligence models provided by companies like OpenAI and Elon Musk’s xAI. But as Secretary of Defense Pete Hegseth moves to offload certain human operations into the realm of the algorithm, one tech firm has emerged as a counterbalance to the White House’s vision for an artificially intelligent military: Anthropic, which “cannot in good conscience” allow Hegseth’s Pentagon to use its AI models without limitations, said CEO Dario Amodei. As the Defense Department weighs consequences, other AI firms are starting to take note — and weigh in.

Taking a ‘bold stand on ethical grounds’

Despite believing in the “existential importance” of using AI to protect the United States and “defeat our autocratic adversaries,” Anthropic has identified a “narrow set of cases,” including mass domestic surveillance and “fully autonomous weapons,” wherein AI can “undermine, rather than defend, democratic values,” Amodei said in a company statement. Moreover, Hegseth’s allegedly retaliatory move to blacklist Anthropic as a supply chain risk is “inherently contradictory,” labeling the company a security risk while simultaneously deeming it “essential to national security.” Hegseth’s move is the “heaviest-handed way you can regulate a business” and marks a “landmark moment” for how the Pentagon “interacts with our cutting-edge technology developed on U.S. soil,” said Katie Sweeten, a former Justice Department official who coordinated the relationship between the DOJ and the Pentagon, at Politico.

While Amodei’s Anthropic faces a government ban, his “main rival,” OpenAI’s Sam Altman, “struck his own deal” to fill Anthropic’s DOD role, said CBS News. Reached just hours before the United States and Israel launched a joint assault on Iran, the OpenAI partnership did not stop the military from using the “very same tools” from Anthropic that it had just banned, said The Wall Street Journal. It will likely take “months” to fully replace Anthropic’s Claude AI model with other platforms.

By “refusing to bow” to a White House intent on “bullying private companies into submission,” Amodei is “taking a bold stand on ethical grounds,” said The Atlantic. While the company’s competitors “jockey for dominance” in the field, Anthropic has “distinguished itself by emphasizing safety.” Refusing White House pressure means Anthropic “may have just averted another crisis” in the form of a “major public backlash” from those who could see the company as a “more principled player in the AI wars.” After Altman’s OpenAI replaced Anthropic at the Pentagon, the latter’s Claude app has been “rocketing to the top of the App Store,” with some users saying they were “defecting” from ChatGPT to Anthropic after feeling “uneasy about OpenAI’s ambitions,” said Business Insider.

Contract negotiation vs. congressional regulation

Anthropic is “rightly concerned” that its products could be used for “unsafe or malicious” ends, said former Air Force Secretary Frank Kendall at The New York Times. But the company is wrong to use “contractual terms” either to “prevent the misuse of its products” or to “deflect responsibility.” Anthropic always has the “option” to not sell to the government at all. The government, meanwhile, “cannot be expected to negotiate provisions” like the ones Anthropic is asking for with all its partners; doing so would be a “nightmare to administer and unenforceable.” What, then, is the “appropriate” way to settle this debate? “Regulation by Congress.”

Pete Hegseth pushed the artificial intelligence developer for expansive access to its potentially lethal creation. CEO Dario Amodei isn’t apologizing for pushing back.