In a dramatic legal development with significant implications for artificial intelligence regulation, newly filed court documents reveal the Pentagon told Anthropic the two sides were “very close” to resolving their differences on AI security issues—directly contradicting the government’s public characterization of the AI company as an “unacceptable risk to national security.” The sworn declarations, filed in California federal court on March 20, 2026, expose substantial discrepancies between private negotiations and public statements in this high-stakes dispute over military use of AI technology.
Anthropic Challenges Pentagon’s National Security Claims
Anthropic submitted two detailed sworn declarations to the U.S. District Court for the Northern District of California late Friday, March 20, 2026. These documents directly challenge the Department of Defense’s assertion that the AI company poses unacceptable security risks. The filings come ahead of a crucial hearing scheduled for Tuesday, March 24, 2026, before Judge Rita Lin in San Francisco.
The dispute originated in late February 2026 when President Trump and Defense Secretary Pete Hegseth publicly announced they were severing ties with Anthropic. The government cited the company’s refusal to allow unrestricted military use of its Claude AI technology as the primary reason. However, the new court documents reveal a more complex negotiation history than previously disclosed.
Key Declarants and Their Credentials
Sarah Heck, Anthropic’s Head of Policy and a former National Security Council official under the Obama administration, submitted the first declaration. Heck personally attended the February 24, 2026 meeting between Anthropic CEO Dario Amodei and Pentagon leadership. Her declaration systematically addresses what she describes as “central falsehoods” in the government’s legal arguments.
Thiyagu Ramasamy, Anthropic’s Head of Public Sector and former Amazon Web Services executive, provided the second declaration. Ramasamy brings six years of experience managing AI deployments for government customers, including classified environments. He built the team responsible for implementing Anthropic’s $200 million Pentagon contract announced in summer 2025.
Contradictory Communications Revealed
The most striking revelation in Heck’s declaration involves an email exchange that occurred on March 4, 2026—just one day after the Pentagon formally finalized its supply-chain risk designation against Anthropic. In this email, Under Secretary Emil Michael told CEO Dario Amodei that the two sides were “very close” on resolving the exact issues the government now cites as evidence of national security threats.
Specifically, Michael referenced alignment on two critical matters:
- Autonomous weapons systems policies
- Mass surveillance of American citizens
This private communication stands in stark contrast to public statements made just days later. On March 6, 2026, Michael posted on X that “there is no active Department of War negotiation with Anthropic.” One week after that, he told CNBC there was “no chance” of renewed talks between the parties.
| Date | Event |
|---|---|
| February 24, 2026 | Meeting between Anthropic CEO and Pentagon leadership |
| Late February 2026 | Public announcement of severed ties |
| March 3, 2026 | Pentagon finalizes supply-chain risk designation |
| March 4, 2026 | Michael emails Amodei about being “very close” |
| March 6, 2026 | Michael states no active negotiations on X |
| March 13, 2026 | Michael tells CNBC “no chance” of renewed talks |
| March 20, 2026 | Anthropic files sworn declarations in court |
| March 24, 2026 | Scheduled hearing before Judge Rita Lin |
Technical and Security Disputes
Ramasamy’s declaration addresses the government’s technical claims about potential security vulnerabilities. He explains that once Claude AI models are deployed inside government-secured, air-gapped systems operated by third-party contractors, Anthropic retains no operational access. The company cannot remotely disable the technology, push unauthorized updates, or monitor user interactions.
Key technical points from Ramasamy’s declaration include:
- No remote kill switches or backdoors exist in deployed systems
- Model changes require explicit Pentagon approval and manual installation
- Anthropic cannot view or extract government user data
- All personnel underwent U.S. government security clearance vetting
Ramasamy further notes that Anthropic is the only AI company whose models for classified environments were built by personnel holding U.S. security clearances. This distinction, he argues, demonstrates a commitment to security protocols that exceeds industry standards.
Legal Arguments and Constitutional Questions
Anthropic’s lawsuit centers on constitutional questions about government retaliation for protected speech. The company argues that the supply-chain risk designation—the first ever applied to an American company—amounts to punishment for Anthropic’s publicly stated views on AI safety, violating First Amendment protections.
The government’s 40-page filing earlier this week rejected this framing entirely. Defense Department lawyers characterized Anthropic’s position as a business decision rather than protected speech. They maintained that the designation resulted from straightforward national security considerations, not retaliation for the company’s ethical stances.
Parallel Legal Proceedings
Separately on March 20, 2026, a federal judge tentatively ruled that Reddit’s lawsuit against Anthropic should return to state court. Reddit originally filed this case in June 2025, accusing Anthropic of scraping its content without permission to train AI models. A hearing to finalize this jurisdictional decision also occurs on March 24, 2026.
Industry Context and Precedent
This case establishes important precedents for government-AI company relationships. The supply-chain risk designation mechanism, typically reserved for foreign entities, now applies for the first time to a domestic technology company. This expansion raises questions about appropriate regulatory boundaries between national security concerns and domestic innovation.
Furthermore, the dispute highlights growing tensions between AI developers’ ethical frameworks and government operational requirements. As AI systems become increasingly sophisticated, these conflicts will likely intensify across multiple sectors including defense, intelligence, and law enforcement.
Conclusion
The newly revealed court documents fundamentally alter the narrative surrounding the Anthropic-Pentagon dispute. The discrepancies between private negotiations and public statements suggest more complex motivations behind the national security designation than initially presented. As the March 24, 2026 hearing approaches, these revelations will likely influence judicial consideration of whether the government’s actions constitute legitimate security measures or improper retaliation for protected speech. The outcome will establish significant precedents for how the United States government regulates and partners with domestic AI companies on national security matters.
FAQs
Q1: What is the core dispute between Anthropic and the Pentagon?
The dispute centers on whether Anthropic’s refusal to allow unrestricted military use of its Claude AI technology constitutes a legitimate national security concern or whether the government’s subsequent supply-chain risk designation represents retaliation for the company’s protected speech on AI ethics.
Q2: What evidence does Anthropic present to challenge the Pentagon’s claims?
Anthropic presents sworn declarations from senior executives, including email evidence showing a Pentagon official stating the sides were “very close” to agreement on key issues one day after the security designation was finalized. Technical declarations also dispute the factual basis of the government’s claimed security risks.
Q3: What is a supply-chain risk designation?
A supply-chain risk designation is a government determination that a company poses risks to national security supply chains. Previously applied only to foreign entities, this marks the first application to an American company, potentially restricting its government contracting opportunities.
Q4: When is the next court hearing?
The next hearing is scheduled for Tuesday, March 24, 2026, before Judge Rita Lin in the U.S. District Court for the Northern District of California in San Francisco.
Q5: How does this case affect other AI companies?
This case establishes important precedents for how the U.S. government regulates domestic AI companies and balances national security concerns with innovation and free speech protections. The outcome could influence future government-AI partnerships across defense and intelligence sectors.
