New Court Filing Reveals Pentagon Told Anthropic The Two Sides Were Nearly Aligned — A Week After Trump Declared The Relationship Kaput

Anthropic's court filings reveal a Pentagon email saying both sides were "very close" — days after Trump cut ties with the AI company over national security concerns.
Matilda

Anthropic vs Pentagon: The Emails That Change Everything

A single email is now at the center of one of the most explosive legal battles in artificial intelligence history. Days after the Trump administration declared Anthropic a national security threat and cut ties with the company, a top Pentagon official was privately telling Anthropic's CEO the two sides were "very close" on the very issues being used to justify that designation. That contradiction is now sworn court evidence — and it raises serious questions about what this dispute is really about.

Credit: Samyukta Lakshmi/Bloomberg / Getty Images

The Lawsuit Nobody Saw Coming

When President Trump and Defense Secretary Pete Hegseth publicly announced they were severing ties with Anthropic in late February 2026, the story seemed straightforward: an AI company had refused to cooperate with the military, and the government had walked away. The Pentagon applied a supply-chain risk designation to Anthropic — the first time such a label had ever been used against an American company — citing what it called an "unacceptable risk to national security."

Anthropic pushed back immediately, filing a lawsuit against the Department of Defense. The company argued the designation was not a security decision at all, but government retaliation for its publicly stated views on AI safety, in violation of the First Amendment. It was a bold legal argument, and the government dismissed it entirely, calling Anthropic's refusal to allow all lawful military uses of its technology a simple business decision — not protected speech.

A hearing is set for Tuesday, March 24, before Judge Rita Lin in San Francisco. And late Friday afternoon, Anthropic filed two sworn declarations that may reshape how the public — and the court — understands this story.

The Email That Raises Hard Questions

The most striking detail in Anthropic's new court filings is an email sent on March 4, 2026, by Emil Michael, the Pentagon's Under Secretary. The email, attached as a court exhibit and addressed to Anthropic CEO Dario Amodei, states that the two sides were "very close" on the two specific issues the government now holds up as proof that Anthropic is a national security threat: the company's positions on autonomous weapons and mass surveillance of American citizens.

The timing is what makes this remarkable. March 4 was the day after the Pentagon formally finalized its supply-chain risk designation against Anthropic. In other words, the official who helped execute the designation was privately telling Anthropic's leadership they were nearly in agreement on the same issues cited to justify it.

What followed publicly was a very different story. On March 5, Amodei published a statement describing productive conversations with the Pentagon. On March 6, Michael posted publicly that there was no active negotiation with Anthropic. A week after that, he stated there was no chance of renewed talks.

Anthropic's Head of Policy, Sarah Heck, filed the declaration containing this email. She stops short of directly accusing the government of using the designation as a bargaining chip — but the timeline she lays out leaves that question hanging unmistakably in the air.

Who Is Sarah Heck, and Why Does Her Voice Matter?

Sarah Heck is not a typical tech policy staffer, and that distinction matters enormously here. Before joining Anthropic, she served as a National Security Council official at the White House under the Obama administration, then moved to Stripe before taking on her current role running Anthropic's government relationships and policy work. She was personally present at the February 24 meeting where Amodei sat down face-to-face with Defense Secretary Hegseth and Under Secretary Michael.

Her declaration directly challenges what she calls a central falsehood in the government's filings: the claim that Anthropic demanded some kind of approval role over military operations. She writes plainly that at no time during negotiations did she or any Anthropic employee make such a demand. She also points out that the Pentagon's concern about Anthropic potentially disabling or altering its technology mid-operation was never raised during any of the months of negotiations between the two sides.

According to Heck, that claim appeared for the first time inside the government's court filings — giving Anthropic no opportunity to respond before it became part of the legal record.

The Technical Reality Behind the Kill Switch Claim

The second declaration comes from Thiyagu Ramasamy, Anthropic's Head of Public Sector, whose background gives his testimony particular weight. Before joining Anthropic in 2025, he spent six years managing AI deployments for government customers at a major cloud provider, including in classified environments. At Anthropic, he built the team responsible for bringing its Claude AI models into national security and defense settings, including a significant Pentagon contract announced last summer.

His declaration targets the government's claim that Anthropic could theoretically interfere with military operations by disabling its technology or altering how it behaves — and he calls this technically impossible. Once Claude is deployed inside a government-secured, air-gapped system operated by a third-party contractor, Anthropic has no access to it. There is no remote kill switch. There is no backdoor. There is no mechanism to push unauthorized updates. Any so-called operational veto, he argues, is a fiction — a change to the deployed model would require the Pentagon's explicit approval and deliberate action to install. Anthropic cannot even see what government users type into the system, let alone extract any of that data. Ramasamy also addresses the government's argument that Anthropic's hiring of foreign nationals makes the company a security risk, noting that Anthropic employees have undergone the same government security clearance vetting required for access to classified information.

He states that to his knowledge, Anthropic is the only AI company in which cleared personnel actually built the AI models designed to run in classified environments.

What the Government Says — and Why It Matters

The Department of Defense filed a 40-page brief earlier this week laying out its defense. The government's core argument is that Anthropic's refusal to allow all lawful military uses of its technology was a business decision, not constitutionally protected speech, and that the supply-chain risk designation was a legitimate national security call with no connection to the company's AI safety views.

Officials maintain the designation had everything to do with Anthropic's unwillingness to cooperate fully with military requirements — and nothing to do with punishing a company for its opinions. That framing matters enormously from a legal standpoint. For Anthropic's First Amendment argument to succeed, the company must demonstrate that the government's action was motivated — at least in part — by a desire to punish it for expressing views on how AI should and should not be deployed. The government is arguing there is nothing to punish here: a vendor declined to meet a client's requirements, and the client responded accordingly. The court will have to weigh both versions of events. But the internal email now in the public record gives Anthropic a concrete exhibit to anchor its argument.

A Dispute That Could Define AI's Role in National Security

This case is bigger than one company and one contract, and its outcome could shape the future of AI in government for years to come. Anthropic's lawsuit marks the first time an American technology company has sued the government over a supply-chain risk designation — a legal tool historically aimed at foreign adversaries, not domestic businesses with security-cleared staff and active Pentagon agreements.

The outcome could set a lasting precedent for how AI companies navigate the tension between maintaining their own safety standards and satisfying the full range of demands that government customers bring to the table. It raises questions that have no clean answers yet: Can a company decline certain military uses of its technology without being labeled a national security threat? Can the government use security designations as leverage in commercial negotiations? And who ultimately decides where the line falls between a business disagreement and a genuine risk to the nation?

The March 24 hearing will not settle all of these questions. But the sworn declarations filed Friday make one thing clear — this dispute is far more complicated, and far more politically charged, than either side initially let the public believe.
