Google has granted the U.S. Department of Defense access to its artificial intelligence models on classified networks, effectively permitting all lawful uses of the technology, according to multiple reports. The deal, reported by The Wall Street Journal and other outlets, comes just weeks after Anthropic publicly refused to offer the Pentagon similar terms, a refusal that led to a rare government designation against the AI maker and an ongoing legal battle.
Anthropic’s stand and the Pentagon’s response
Anthropic, the AI company founded by former OpenAI employees, declined to grant the Department of Defense unrestricted access to its models. The company sought binding guardrails to prevent its technology from being used for domestic mass surveillance and autonomous weapons. In response, the Pentagon labeled Anthropic a “supply-chain risk” — a designation typically reserved for foreign adversaries — and the two parties are now locked in a lawsuit. A federal judge last month granted Anthropic an injunction against that designation while the case proceeds.
Google, OpenAI, and xAI step in
Google is the third major AI company to sign a deal with the Pentagon after Anthropic’s refusal. OpenAI and xAI, Elon Musk’s AI venture, had already inked similar agreements. Google’s contract includes language stating that the company does not intend for its AI to be used in domestic mass surveillance or autonomous weapons, according to the WSJ. However, the report notes that it remains unclear whether those provisions are legally binding or enforceable.
Employee backlash
The deal has drawn internal opposition at Google. More than 950 employees have signed an open letter urging the company to follow Anthropic’s lead and refuse to sell AI to the Defense Department without enforceable guardrails. Google has not responded to requests for comment on the letter or the contract terms.
Why this matters
The Pentagon’s push for unrestricted AI access reflects a broader trend of the U.S. military seeking to integrate advanced artificial intelligence into its operations. The contracts also highlight a growing divide among AI companies: some are willing to accept government terms with limited oversight, while others, like Anthropic, are demanding clearer ethical boundaries. The outcome of the Anthropic lawsuit could set a legal precedent for how AI companies negotiate with the federal government, especially on sensitive issues like surveillance and autonomous weapons.
Conclusion
Google’s decision to expand Pentagon access to its AI, despite internal opposition and Anthropic’s public stand, underscores the complex ethical and commercial pressures facing the tech industry. As the legal battle between Anthropic and the Pentagon unfolds, the contracts signed by Google, OpenAI, and xAI may shape the future of military AI deployment for years to come.
FAQs
Q1: Why did Anthropic refuse the Pentagon’s request?
Anthropic wanted binding guardrails to prevent its AI from being used for domestic mass surveillance and autonomous weapons. The Pentagon refused those terms, leading to a legal dispute.
Q2: What does Google’s contract with the Pentagon include?
Google’s agreement includes language stating that the company does not intend its AI to be used for mass surveillance or autonomous weapons, but the enforceability of those provisions is unclear, according to The Wall Street Journal.
Q3: How have Google employees reacted?
Over 950 Google employees signed an open letter asking the company to refuse the Pentagon deal without stronger ethical safeguards, similar to Anthropic’s position.