Category: AI Warfare | Pentagon Tech | Palantir & Anthropic Date: April 2, 2026 | 7 min read
To understand why Iran declared Google, Apple, and Meta legitimate military targets — companies that make search engines and social networks — you need to understand what happened inside the US military’s targeting system in the first 24 hours of Operation Epic Fury. The answer involves a commercial AI model, a defence software firm, and a chain of events that permanently collapsed the boundary between Silicon Valley and the battlefield.
The Machine That Processes 1,000 Targets a Day
Palantir’s Maven Smart System, powered by Anthropic’s Claude, generated approximately 1,000 prioritized targets in the first 24 hours of Operation Epic Fury. (The Dupree Report)
CENTCOM Commander Admiral Brad Cooper confirmed in a March 11 video update that AI tools help US forces sift through vast amounts of data in seconds, so commanders can make decisions faster than the enemy can react. Tasks that once took days now take seconds. Humans still approve the final targets, but the machine does the analysis. (Jammin 99.9)
How Claude Gets Into a Classified Targeting System
The pipeline from a commercial AI company to a military strike operation is more direct than most people realize.
In November 2024, Anthropic partnered with Palantir and Amazon Web Services to supply Claude to US defence and intelligence systems, including classified environments. In June 2025, Anthropic introduced Claude Gov, a version tailored for government and national-security workflows. Through partners such as Palantir and the AWS Top Secret cloud, Anthropic provides custom, safeguarded versions of its tools for government and classified use. (WION News)
Claude is deployed within Palantir’s Impact Level 6 environment, a classified system authorized to handle data up to the Secret level. Demo recordings show Claude functioning as a natural-language interface, allowing military planners to query intelligence databases and receive tactical summaries in plain English. (The Dupree Report)
What Claude Actually Does in the System
| Claude’s Role in Maven | What It Does NOT Do |
|---|---|
| Intelligence database queries in plain English | Issue autonomous strike orders |
| Target prioritization and summarization | Control weapons systems directly |
| Battle scenario simulation and outcome modeling | Make lethal decisions without human approval |
| Risk and collateral damage assessment support | Act independently of human operators |
Claude served as a decision-support tool, providing insights, summaries, and simulations to human operators. It did not make lethal decisions without human approval, and it did not act as the mastermind of the strikes. (WION News)
The Anthropic Paradox: Banned But Still Running
The Trump administration designated Anthropic a “supply chain risk” — the first time this designation, traditionally reserved for foreign adversaries, was applied to an American company. The formal declaration requires defence vendors and contractors to certify that they do not use Anthropic’s models in their work with the Pentagon. (CNBC)
Yet despite the ban:

- Palantir CEO Alex Karp confirmed on CNBC that Claude is still running inside the targeting system.
- Claude remains embedded through a six-month phase-out.
- OpenAI has offered classified network access.
- Google has deployed AI agents for non-classified military use.
- No settled rules govern what any of these systems are authorized to do. (Jammin 99.9)
Why This Makes Every Tech Company a Target
Iran’s logic, while contested under international law, follows a traceable chain: Cloud providers host the AI. AI firms build the models. Defence contractors integrate them. The military uses them to kill Iranian leaders. Therefore — to Iran — every link in that chain is a combatant.
“You’re reducing a massive human workload of tens of thousands of hours into seconds and minutes. You’re automating human-made targeting decisions in ways which open up all kinds of problematic legal, ethical and political questions,” said Craig Jones, an expert on modern warfare. (Democracy Now!)
Virginia Sen. Mark Warner, top Democrat on the Senate Intelligence Committee, told NBC News that the military’s use of AI in targeting raises unanswered questions, and said flatly: “This has to be addressed.” (Jammin 99.9)
It has not been addressed. And until it is, every tech company whose infrastructure, chips, or AI models touch a defence workload anywhere in the world now operates with a new, permanent risk category on its threat register.
Tags: Palantir Maven Smart System · Anthropic Claude Military · AI Warfare 2026 · Pentagon AI Contracts · Operation Epic Fury · AI Targeting Ethics · Claude Gov · Iran IRGC Tech Targets · Defence AI Law