How AI shaped the Iran-Israel 12-day war

The 12-day Iran-Israel conflict from June 13 to 24, 2025, will be remembered not for its duration or casualties alone, but for how Artificial Intelligence (AI) moved from a supportive background tool to the centrepiece of command-and-control architecture. AI systems—employed in real-time intelligence processing, target prioritisation, and digital influence campaigns—reshaped the tempo and character of warfare.
Prominent among the providers of these systems was Palantir Technologies, a US-based analytics firm with publicly acknowledged strategic partnerships with Israel's Ministry of Defence, reported to supply battlefield software used in operational planning and intelligence fusion (The Jerusalem Post, June 17, 2025).
The US, both covertly through intelligence collaboration and overtly via Operation Midnight Hammer airstrikes on Iranian nuclear facilities, played a central role in orchestrating key operations and reinforcing this AI-driven battlefield (Politico, June 20, 2025). What emerged was a transnational, algorithmically coordinated military campaign—directed from Tel Aviv through Palantir dashboards in forward centres, underpinned by strategic coordination from Washington, and contested by Iran from Tehran.
This was not simply a regional skirmish. It sent a global signal. The conflict confirmed that AI is now a full-spectrum actor in geopolitics, fundamentally redefining command and control, upsetting traditional deterrence, and challenging the boundaries of human judgement in decisions of war and peace.
From Gaza to Isfahan: The AI playbook expands
Israel, long considered a pioneer in military AI, adapted lessons from its operations in Gaza to a broader and more complex battlefield. While the now-infamous "Lavender" database—which reportedly profiled some 37,000 individuals for targeting in Gaza using AI-driven heuristics—was not directly deployed in the Iran campaign, Israeli forces relied on similar AI-driven systems for target identification and prioritisation (+972 Magazine, April 2024).
These systems integrated satellite imagery, signals intelligence, and prior surveillance data to help guide strikes on missile sites in Isfahan, air defence installations near Natanz, and suspected drone command centres. In short, while the database itself may not have crossed the border, the methodology and algorithmic logic it embodied certainly did—marking a continuity in Israel's evolving AI-led military doctrine.
AI-assisted satellite imagery analysis and communications intercepts helped identify and prioritise high-value Iranian targets. Israel's elite Unit 8200, known for cyber-espionage and signals intelligence, reportedly worked closely with US intelligence agencies—a collaboration widely acknowledged but never officially confirmed (The Times of Israel, June 18, 2025)—to coordinate targeting algorithms and assess Iranian response patterns.
These were not mere technical assistance arrangements. The US's involvement was both covert and overt. Intelligence-sharing with Israel had accelerated in the lead-up to the strikes (New York Times, June 21, 2025). Pentagon cyberwarfare units reportedly helped run simulations and predictive modelling on potential Iranian retaliation scenarios (Defense One, June 19, 2025). When the airstrikes commenced, they did so with a transnational AI-enhanced framework already in place.
Iran's asymmetric AI response
Iran, though technologically behind, showed how asymmetry combined with AI can disrupt even a highly digitised adversary. Its use of Shahed-136 drones was not new, but this time they were deployed in greater volume and with more coordinated timing (Al Jazeera, June 22, 2025). While lacking advanced autonomous navigation, their integration with basic AI routines—such as visual recognition to avoid decoys—represented a low-cost, high-impact evolution in drone warfare.
Perhaps more disruptive was Iran's use of AI-generated content and narrative warfare. Deepfake videos of Israeli military officials, AI-scripted propaganda clips, and bot-driven amplification campaigns flooded Arabic and Persian social media spaces (Brookings Institution, June 2025). While Israel ran its own digital counter-narratives, this battle for perception was conducted algorithm to algorithm, not just government to government.
Iran also tapped into open-source intelligence (OSINT), using publicly available data—especially from Israeli reservists' social media posts—to monitor troop mobilisations and infer targeting priorities (Reuters Special Report, June 2025). These tactics underscored how AI now weaponises even the most banal digital footprints.
AI at the core of defence and attack
Israel's Iron Dome and David's Sling missile defence systems, already world-class, were pushed to new levels of responsiveness. While there is no public confirmation of major new AI upgrades to these systems, reports suggest that machine learning was used to optimise interception prioritisation during peak missile salvos (Haaretz, June 2025), reducing overkill and improving resource management.
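How exactly machine learning figured in that prioritisation has not been disclosed. As a purely illustrative sketch of the general idea the reporting points to, the snippet below scores each inbound track by expected damage and then spends a limited stock of interceptors on the highest-risk threats first; every field name and number in it is a hypothetical assumption, not a real system parameter.

```python
# Toy sketch of salvo-time interception prioritisation.
# The actual Iron Dome / David's Sling logic is not public; this only
# illustrates the reported concept: score each inbound threat, then spend
# a limited interceptor budget on the highest-risk tracks first to avoid
# overkill. All fields and numbers are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Track:
    track_id: str
    hit_probability: float   # estimated chance the threat reaches a defended area
    impact_severity: float   # estimated damage if it does (0-1 scale)
    decoy_likelihood: float  # estimated chance the track is a decoy

def threat_score(t: Track) -> float:
    """Expected damage, discounted by the chance the track is a decoy."""
    return t.hit_probability * t.impact_severity * (1.0 - t.decoy_likelihood)

def allocate_interceptors(tracks: list[Track], budget: int) -> list[str]:
    """Greedily assign one interceptor per track, highest threat first."""
    ranked = sorted(tracks, key=threat_score, reverse=True)
    return [t.track_id for t in ranked[:budget]]

if __name__ == "__main__":
    salvo = [
        Track("T1", hit_probability=0.9, impact_severity=0.8, decoy_likelihood=0.05),
        Track("T2", hit_probability=0.4, impact_severity=0.2, decoy_likelihood=0.60),
        Track("T3", hit_probability=0.7, impact_severity=0.9, decoy_likelihood=0.10),
    ]
    # T1 and T3 take interceptors; T2, a likely decoy, is left to lower-cost defences.
    print(allocate_interceptors(salvo, budget=2))  # ['T1', 'T3']
```

Real systems weigh far more variables, but the economics the reporting highlights, cheap threats against expensive interceptors, follow directly from this kind of budgeted ranking.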
Anti-drone systems like "Smart Shooter" were activated across northern and central Israel, demonstrating how computer vision and human-in-the-loop design can still function under swarm conditions. Iran's mass launches did not overwhelm Israeli defences, but they did reveal a cost-effectiveness gap: while Iran lost low-cost drones, Israel had to expend expensive interceptors (Defense News, June 24, 2025).
In cyberspace, the Cyber Dome system, developed after years of Iranian and Hezbollah infiltration attempts, neutralised dozens of coordinated cyberattacks during the conflict, according to Israeli cyber officials (Israel National Cyber Directorate, June 25, 2025). But Israeli infrastructure was still hit. Several water facilities and municipal services experienced disruptions, some attributed to Iranian cyber groups with suspected Russian software support (The Guardian, June 23, 2025).
War rooms, simulations, and the creep of automation
In the Israeli war room, AI didn't just aid decision-making—it framed it. Military planners reportedly used predictive models to simulate thousands of Iranian retaliation scenarios. These simulations helped determine strike sequences and optimal timing—balancing operational success with political optics (Haaretz, June 25, 2025).
While final strike decisions remained under human command, AI-informed simulations carried significant weight. As one retired Israeli colonel noted in Haaretz, "When the machine tells you there's an 86 percent chance Iran will not respond to a specific strike, that shapes how you advise the cabinet."
Yet this reliance introduces profound vulnerabilities. Predictive models, no matter how sophisticated, operate on historical data, limited inputs, and probabilistic logic. A single misfire—whether from incorrect assumptions or adversarial deception—could misguide decision-makers into a catastrophic escalation. In an environment where minutes count and signals are noisy, an AI's false sense of certainty may lull human actors into overconfidence, eroding the caution traditionally built into military deliberation.
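To make that fragility concrete, here is a deliberately crude sketch of how a "probability of no retaliation" figure can be generated by simulation, and how a small shift in one assumed input swings the answer. It bears no relation to any actual model used in the war; every parameter is a labelled assumption.

```python
# Toy Monte Carlo sketch of a retaliation-probability estimate.
# Nothing here reflects any real model used in the conflict; it only shows
# how a headline figure like "86 percent chance of no response" can be
# produced, and how sensitive it is to the analyst's assumed inputs.

import random

def simulate_no_response_probability(
    base_restraint: float,      # assumed baseline tendency not to retaliate
    domestic_pressure: float,   # assumed pressure on leadership to respond
    trials: int = 100_000,
    seed: int = 0,
) -> float:
    rng = random.Random(seed)
    no_response = 0
    for _ in range(trials):
        # Each trial draws a noisy "resolve" level around the assumed pressure;
        # retaliation occurs when resolve exceeds the assumed restraint.
        resolve = rng.gauss(mu=domestic_pressure, sigma=0.15)
        if resolve < base_restraint:
            no_response += 1
    return no_response / trials

if __name__ == "__main__":
    # Under one set of assumptions the model reports high confidence...
    print(simulate_no_response_probability(base_restraint=0.65, domestic_pressure=0.49))  # roughly 0.86
    # ...but nudging a single assumed input produces a very different answer.
    print(simulate_no_response_probability(base_restraint=0.65, domestic_pressure=0.60))  # roughly 0.63
```

The point is not the numbers but the shape of the problem: a confident-looking percentage rests entirely on inputs a human analyst chose to assume.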
In Tehran, AI tools were used more sparingly but not insignificantly. Iranian media campaigns were shaped by sentiment analysis tools tracking how global audiences responded to images, videos, and hashtags (Middle East Eye, June 23, 2025). Even Iran's decision to target US bases in Qatar—later walked back after Washington's direct warning—was reportedly gamed out through a basic AI-based escalation-risk model (Financial Times, June 24, 2025).
Illusion of victory, reality of loss
As the war wound down after 12 exhausting days, each side claimed success, but the reality was more sobering. Iran's nuclear facilities were damaged but not destroyed. Israel's deterrence was reaffirmed, but only with caveats, and at the cost of international condemnation and increased domestic polarisation. The US, having helped orchestrate and stabilise the conflict behind the scenes, emerged diplomatically weakened in the Global South, where perceptions of American double standards hardened (Foreign Affairs, June 26, 2025).
What did not emerge diminished was the role of AI itself. It triumphed—not by design, but by consequence. Its centrality in targeting, defending, simulating, and persuading made clear that wars are no longer shaped by generals alone, but by engineers and coders working in data centres far from the battlefield.
Global implications
The Iran-Israel conflict has not gone unnoticed by other major powers. China, already testing AI-enabled battlefield logistics and drone swarms, is closely studying the integration of algorithmic decision-making into active conflict scenarios. Russia, with its hybrid warfare experience in Ukraine and Syria, has reportedly accelerated the development of autonomous systems for electronic warfare and information operations.
The 12-day war served not only as a testbed but also as a model—demonstrating the disruptive capacity of AI not just to execute operations, but to shape them from planning to perception. As these technologies proliferate, so too does the risk of global military doctrines adapting in similarly opaque and unregulated ways.
The urgency of control
The Iran-Israel war of June 2025 was not an outlier. It was a blueprint. As AI becomes more embedded in military doctrine worldwide, the absence of international regulatory norms is no longer just dangerous—it's existential.
We urgently need a new Geneva-like framework for algorithmic warfare. That includes: i) Banning fully autonomous lethal weapons; ii) Mandating human oversight in AI-assisted strike systems; iii) Prohibiting AI-generated disinformation during armed conflict; and iv) Establishing an international AI-military audit body.
Without such controls, the next conflict may escalate not by political miscalculation, but by feedback loops between duelling algorithms—the digital equivalent of sleepwalking into war.
When the algorithm writes the aftermath
The Iran-Israel conflict was marked by devastation, confusion, and strategic ambiguity. But it also marked something subtler and far more enduring: the quiet displacement of human judgement by machine logic. While Iran, Israel, and the US all walked away weakened or chastened, AI emerged stronger, more embedded, and more ungoverned.
As we reflect on the costs of those 12 days, we must ask not only who fired the first shot or signed the last truce—but who, or what—is writing the next chapter of military history. The answer may not be found in a capital or bunker—but in a server rack humming quietly in the background, running simulations that never sleep.
Dr Faridul Alam is a retired academic who writes from New York, US.
Views expressed in this article are the author's own.