A coalition of tech giants, former military officials, and civil liberties groups is backing Anthropic in a legal battle against the US Department of War (DOW) over its designation of the AI company as a “supply chain risk.” The dispute centers on Anthropic’s refusal to grant unrestricted access to its AI chatbot, Claude, raising critical questions about government overreach, national security, and the future of AI development.
The Dispute: Access vs. Control
The conflict began when the DOW demanded full access to Claude, giving Anthropic just 48 hours to comply or face sanctions. Anthropic CEO Dario Amodei refused, drawing two clear lines: no use of the AI for mass domestic surveillance and no integration into fully autonomous weapons systems. This stance led the DOW to label Anthropic as a supply chain risk, effectively excluding the company from lucrative government contracts.
The designation allows the government to bar Anthropic from contract awards, exclude its products, and prevent prime contractors from using its technology. The move is particularly significant given Anthropic’s recent role as the only AI provider approved for use in classified military networks, including intelligence analysis for the DOW and deployment at national nuclear laboratories.
Broad Support for Anthropic’s Position
Microsoft has filed a legal brief arguing that the DOW’s actions are vague, unprecedented, and economically damaging. The filing warns of “severe economic effects” that are not in the public interest, calling for a temporary lifting of the designation.
A separate joint filing, backed by former intelligence officials including Michael Hayden (ex-CIA Director), accuses US Defense Secretary Pete Hegseth of misusing government authority for “retribution.” The group argues that targeting Anthropic creates “sudden uncertainty” that could disrupt military operations, including ongoing conflicts like the war in Iran.
Further support comes from 37 AI engineers formerly at OpenAI and Google’s DeepMind, who call the DOW’s actions an “improper and arbitrary use of power.” They warn that such actions will stifle innovation and harm the US’s competitiveness in the AI field. The Electronic Frontier Foundation and the Cato Institute filed a joint brief stating that the government’s actions violate the First Amendment.
The Core Concerns
The conflict isn’t just about one company; it reflects a broader debate about the limits of government control over emerging technologies. The case raises concerns that allowing unfettered government access to AI could lead to unchecked surveillance, automated warfare, and the erosion of civil liberties.
As one filing put it, “If allowed to proceed, this effort to punish one of the leading U.S. AI companies will undoubtedly have consequences for the United States’ industrial and scientific competitiveness in the field of artificial intelligence and beyond.”
The Future of AI-Government Relations
Despite the legal challenge, the DOW has announced a six-month phase-out of Claude from military operations. Anthropic’s CEO has stated the company remains open to working with the government on contracts that align with its principles.
The outcome of this legal battle will set a precedent for how the US government regulates AI, and for whether control takes priority over innovation and ethical considerations. The broader implications are clear: if the government can dictate the policies of private AI companies, it could effectively control what Americans do and say.