Between them, the United States and Israel struck more than 2,000 targets within the first 24 hours of their war with Iran.

For even the largest militaries, identifying, selecting and then precisely locating such a high volume of targets is an almost impossible task. But the U.S. military had some help. Claude, the “next generation AI assistant” built by Anthropic, was used in the planning of “Operation Epic Fury.” This, even though the Department of War recently labeled Anthropic a “supply chain risk.”

Anthropic is one of the world’s leading AI companies. Together with the data-analytics firm Palantir, it has been working with the Pentagon since 2024 to embed its systems in military decision-making – creating what is arguably the operating platform of present-day U.S. warfare and intelligence. Even though Secretary of War Pete Hegseth said the company “delivered a master class in arrogance and betrayal” and that the government would “cease all use of Anthropic’s technology,” Anthropic’s systems are too deeply woven into modern U.S. warfare not to have been essential to the attack on Iran. The question may not be whether companies like Anthropic can ringfence their tech, but whether the Pentagon will simply commandeer it.

Craig Jones, an academic at Newcastle University who studies automated kill chains, has told reporters that “the AI machine is making recommendations for what to target, which is actually much quicker in some ways than the speed of thought.” Similar AI systems have been used by Israel to coordinate its bombing campaign in Gaza, one of the most destructive in human history.

Among the first sites hit in the United States and Israel’s aerial bombardment of Iran was the Shajarah Tayyebeh primary school for girls in the southern town of Minab. It was a Saturday morning, and school was in session. According to Iranian state media, at least 165 people were killed, most of them girls between the ages of seven and 12. Another 96 were severely injured. Eyewitness accounts and open-source intelligence reports corroborate the claims of mass civilian casualties. Both Iran and Israel have denied responsibility. The United States has said it is “looking into” allegations that the school was destroyed by one of its missiles. Maybe, given the volume of the bombardment, it has simply lost track.

It is too soon to know why the school was targeted – or whether it was an error. Either way, the U.S. military’s reliance on AI raises difficult questions.

AIs get things wrong all the time. Maybe it’s an extra finger in an AI-generated image, or a ‘hallucinated’ reference in a research report. Or maybe an algorithm sends a missile to the wrong address. That’s why Anthropic CEO Dario Amodei has said that weapons “that take humans out of the loop entirely and automate selecting and engaging targets” are simply not reliable enough. That position — along with Anthropic’s refusal to allow Claude to be used for mass domestic surveillance (though it has no such qualms about foreign surveillance) — led the Pentagon to cancel a $200-million contract with the company on Friday, the day before the attacks on Iran began. The Department of War immediately signed a new deal, minus any ethical guardrails, with OpenAI.

Anthropic’s confrontation with the Pentagon has burnished its reputation as an “ethical” AI company. But it may have found its ethical backbone too late. Critics argue that even within Anthropic’s “red lines” there is enormous potential for abuse, while a “human in the loop” does not necessarily prevent mistakes — raising questions about who, exactly, is responsible when those mistakes kill people. Francesca Albanese, the United Nations special rapporteur for Palestine, accused Amazon, Google and Microsoft in a 2025 report of being “complicit in genocide” for providing cloud storage systems to the Israeli military. Anthropic’s integration into the U.S. military has gone much deeper.

While Israel and the U.S. wage an AI-powered war, Iran is responding with a technological revolution of its own. The Islamic Republic has pioneered the production of low-cost one-way attack drones, most notably the Shahed-136, which costs just $34,000 to produce and as much as $4 million to shoot down. These are battle-tested: Russia has launched an estimated 57,000 Shahed-type drones in its war against Ukraine. For all its reliance on high-tech, AI-powered systems of its own, the U.S. has taken note: an American version of the Shahed also made its debut, alongside Claude, in the attack on Iran.

In response, Iran has aimed more than 1,000 drones at neighboring Gulf states since the war broke out on Saturday. Hundreds have been shot down, but even the most sophisticated air defenses struggle with this sheer volume, and dozens have struck their targets, threatening to prolong the war and do more damage to U.S. allies than anticipated. Significantly, those targets included at least three Amazon data centers in Dubai and Bahrain. Just last month, Amazon announced that it was making Anthropic’s Claude available to its Middle Eastern customers. Claude suffered two global outages this week — it is not clear whether these were related to the data center attacks.

Tech evangelists promise that artificial intelligence will, one day, cure cancer, end poverty and greatly improve our quality of life. But the new technology’s most obvious impact so far has been on warfare. For those with access to them, AI systems like Claude make it dramatically easier to bomb hundreds of targets at once — and much harder to figure out who is accountable when something goes wrong. On Truth Social, Donald Trump — who has promised to stop wars, not start them — posted approvingly that technology and munitions now mean wars “can be fought ‘forever,’ and very successfully.”

As the bombing of Iran continues, we are not far from a time when AI not only parses data to select targets but actually chooses when to pull the trigger. And advanced AI models have shown far fewer qualms about, for instance, deploying nuclear weapons than humans faced with similar scenarios. One day, when — if — war crimes investigators are able to pin down exactly who is responsible for killing dozens of young girls in Minab, tech bosses may find themselves implicated alongside military and political leaders. “The AI did it” can’t be their defense.

A version of this story was published in this week’s Coda Currents newsletter. Sign up here.