What Anthropic’s Clash With the Pentagon Is Really About

The weekslong conflict between Anthropic and the Department of Defense is entering a new phase. After being designated a supply-chain risk by DOD last week, which effectively forbids Pentagon contractors from using its products, the AI company filed a lawsuit against DOD this morning alleging that the government’s actions were unconstitutional and ideologically motivated. Then, this afternoon, 37 employees from OpenAI and Google DeepMind—including Google’s chief scientist, Jeff Dean—signed an amicus brief in support of Anthropic, in essence lending support to one of their employers’ greatest business rivals (even as OpenAI itself has established a controversial new contract with DOD).

The standoff is unprecedented. For the past few weeks, Anthropic has been in heated negotiations with the Pentagon over how the U.S. military can use the firm’s AI systems. Anthropic CEO Dario Amodei had refused terms that would have seemingly allowed the Trump administration to use the company’s AI systems for mass domestic surveillance or to power fully autonomous weapons, leading DOD officials to accuse Amodei of “putting our nation’s safety at risk” and of having a “God-complex.”

Nobody knows how this dispute will end. A spokesperson for Anthropic told me that the lawsuit “does not change our longstanding commitment to harnessing AI to protect our national security” and that the firm will “pursue every path toward resolution, including dialogue with the government.” A DOD spokesperson told me that the department does not comment on litigation.

[Read: Inside Anthropic’s killer-robot dispute with the Pentagon]

But a conflict like this was inevitable, and more are sure to come. The government does not have anything close to a legal framework for regulating generative AI or, for that matter, online data collection.
There are few legal, externally enforced guardrails on the use of AI in autonomous weaponry, and fewer still on how AI can be used to process the huge volumes of information that federal agencies can collect on people: location data, credit-card purchases, browsing-history data, and so on. Because the laws are loose, Anthropic and OpenAI have been able to set their own privacy policies and guidelines for how AI can and cannot be used, and then change them at will; OpenAI, Meta, and Google, for instance, have all reversed previous restrictions on military applications of AI. But this cuts in the other direction as well: Anthropic has effectively been branded an enemy of the state for opposing the administration’s desire to be able to use…
THE ATLANTIC – Technology | Internet & Technology, Mon, March 9, 2026