The US Department of Defense has entered into classified network AI agreements with eight technology companies, authorizing deployment of their AI capabilities on military networks classified up to the Top Secret level. The companies are Amazon Web Services, Google, Microsoft, OpenAI, NVIDIA, SpaceX, Reflection AI (a NVIDIA-backed startup), and Oracle. Anthropic — maker of the Claude model family — was excluded after a dispute over usage guardrails, and was separately designated a supply chain risk by the Pentagon earlier this year.
Technical Details: Impact Level 6 and Impact Level 7
The agreements cover deployment on two classification tiers:
Impact Level 6 (IL6): Used for storage and processing of information classified up to the Secret level. This covers a broad range of military operational and intelligence data that is sensitive but below the Top Secret threshold.
Impact Level 7 (IL7): Supports highly restricted data. This is the highest DoD cloud classification tier, covering information whose unauthorized disclosure would cause exceptionally grave damage to national security.
Deploying commercial AI models at IL7 is operationally significant. Prior to these agreements, the use of frontier commercial AI on systems handling this classification level was not authorized. The agreements establish the technical and legal frameworks under which the models can operate in these environments.
The stated uses include: streamlining data synthesis, elevating situational understanding, and augmenting warfighter decision-making in complex operational environments. The agreements are explicitly for "lawful operational use," though the specific applications — intelligence analysis, logistics optimization, targeting assistance, communications, or other functions — are not detailed in the public announcement.
All companies that agreed have presumably accepted DoD terms that permit unrestricted use for lawful military purposes. The precise contractual language is not public.
The Anthropic Exclusion
The notable absence from the agreement list is Anthropic. The DoD designated Anthropic as a supply chain risk earlier in 2026 — an unusual designation typically applied to foreign-controlled or compromised vendors, not to a US AI company.
The core dispute, as reported by multiple outlets, was over usage guardrails. The Pentagon sought unrestricted use of Claude for military applications. Anthropic insisted on contractual restrictions preventing its technology from being used for domestic mass surveillance and autonomous weapons systems. The companies could not reach terms. Anthropic did not receive an IL6/IL7 agreement.
The exclusion is significant for two reasons. First, Claude models (particularly Claude Opus) are among the most capable frontier models for complex reasoning tasks — their exclusion from classified DoD networks is a meaningful operational constraint for any military use cases where Claude's specific capabilities are preferred. Second, Anthropic's position — that it would forgo a major government contract rather than remove safety guardrails — establishes a precedent in AI industry negotiations with the military.
Broader Implications
These agreements formalize a shift that has been underway in defense circles for several years: AI is moving from experimental to operational in classified military contexts. The participation of SpaceX and Reflection alongside established cloud providers (AWS, Google, Microsoft) signals that the DoD is deliberately diversifying its AI vendor base rather than consolidating on a single provider's stack.
Breaking Defense reported that the agreements position AI as infrastructure rather than as a specific application — the companies provide the capability; military operators determine the applications within the authorized use framework.
What Security Professionals Should Watch
- Monitoring and auditability on classified AI inference. Deploying AI on IL7 networks creates new attack surfaces: prompt injection attacks against AI systems with access to classified data could exfiltrate information through generated outputs. Standard AI security testing (red-teaming, adversarial prompt testing) must be adapted for classified deployment contexts.
- Supply chain integrity for model weights. Model weights are sensitive artifacts. IL6/IL7 deployments require verifiable provenance for training data and model versions — a more complex supply chain security problem than traditional software.
- Autonomous decision support vs. autonomous decision-making. The line between "augmenting warfighter decision-making" and automating targeting decisions is not defined in the public agreements. How that line is drawn will determine the actual risk profile of these deployments.
- Vendor concentration risk. Eight vendors is more diverse than one, but AWS, Google, and Microsoft together control the vast majority of US cloud infrastructure. An incident at any one provider could have classified-system implications.
Conclusion
The Pentagon's classified AI agreements accelerate the integration of commercial frontier AI into national security operations. The Anthropic exclusion — and the supply chain risk designation — represents the first clear public case of an AI company declining a major government contract on safety grounds, a dynamic the broader AI industry will watch closely as similar agreements are negotiated globally.
For any query contact us at contact@cipherssecurity.com

