AI and war: Are algorithms reshaping the battlefield and the future of military conflict?

In the latest strikes on Iran, the US military reportedly used artificial intelligence giant Anthropic’s Claude to support Operation Epic Fury against Tehran. It marks a milestone in yet another uncomfortable union of artificial intelligence and armed conflict.

While such reports are often buried deep within defence communiques, they point to a striking reality: AI is no longer a sci-fi fringe in warfare. It is at the very centre.

On the other side of the world, for Ukraine and Russia, this has become a literal race. Outnumbered and outgunned in traditional terms, Ukraine embraced drones early in its defence against Moscow’s invasion. Over time, the skies over the frontlines have become dominated by unmanned systems.

Estimates suggest 70-80 per cent of battlefield casualties are now caused by drones, a grim testament to their lethal effectiveness. With many Ukrainian uncrewed aerial vehicles (UAVs) built from commercial parts and open-source software, Kyiv has turned cheap ingenuity into an advantage in attrition warfare, forcing Russia to follow suit and innovate rapidly.

Ukraine-Russia: The AI war

The drone contest between Ukraine and Russia has become a furious cycle of adaptation and counter-adaptation. Electronic warfare units on both sides scramble to jam and deceive enemy UAVs. In response, engineers have adopted fibre-optic control links and other techniques to evade interference.

But the battle doesn’t stop at hardware. The next frontier is AI-enabled autonomy, allowing drones to identify, prioritise, and strike targets with minimal human intervention, even amid intense jamming.

Ukraine has also expanded experimentation to the ground. Recent large-scale trials involved over 70 types of domestically developed unmanned ground vehicles (UGVs), tested under extremes of distance, terrain, and electronic warfare. Many exceeded expectations, and some are already in action with elite units, hinting at a future where fully autonomous ground vehicles patrol alongside human troops.

At the same time, Ukraine’s Drone Line project seeks to create a detection-and-kill chain extending up to 40 kilometres, integrating aerial reconnaissance with ground-based response. The aim is clear: to make it virtually impossible for opposing forces to move without detection. In this environment, algorithms are not just tools; they become watchkeepers.

The rise of killer robots

No discussion of AI and war would be complete without addressing the spectre of “killer robots”: autonomous weapons systems that can select and engage targets without meaningful human control. Despite dramatic headlines, most current AI in military use is narrow and task-specific, with human operators still in the loop.

Yet as capabilities grow, so do ethical and legal concerns. Can an algorithm reliably distinguish a combatant from a civilian? Who bears responsibility for an autonomous system’s lethal choice?

When Russia’s President Vladimir Putin warned that the nation leading in artificial intelligence would “rule the world,” it sounded like bravado. Yet his words now echo with chilling accuracy. The race to weaponise AI has become the defining struggle of modern warfare, blurring the line between innovation and annihilation.

From Washington to Beijing, and Riyadh to London, military budgets are being rewritten to prioritise machines that can think, decide, and, one day, kill without human intervention.

The scale of national investment in military AI tells its own story. China has made the most ambitious bet, pledging $150 billion to dominate the AI landscape. The United States follows with $4.6 billion in defence-related AI spending, while Russia lags far behind at $181 million.

The United Kingdom, not to be left behind, has poured £415 million into its Protector drone programme. At the same time, Saudi Arabia, a newer but eager entrant, invested $69 billion, nearly a quarter of its national budget, in defence in 2023. Riyadh’s planned $40 billion AI fund could soon make it the world’s single largest investor in artificial intelligence.

The growing obsession with AI-driven warfare is transforming the nature of combat. The pace of development hints at a disturbing future where drones may no longer need human consent to strike. Instead, they might react instantly, guided by lines of code rather than conscience.

This mechanised logic threatens to remove negotiation, hesitation, and mercy from the battlefield, leaving only algorithmic retaliation.

The United States Department of Defense now predicts that by 2035, uncrewed aircraft will constitute 70 per cent of its air fleet. The United States, the United Kingdom, and Israel remain the heaviest users of armed drones, fielding the Predator and Reaper models built by General Atomics, which have seen near-daily use in Syria.

Israel, meanwhile, has perfected drones for surveillance and precision strikes in Gaza, pushing the boundaries of unmanned warfare even further.

This technological momentum is contagious. Nearly every Nato member state now operates drones. Turkey and Pakistan have emerged as new manufacturers, while China exports its Wing Loong and CH-series drones to the UAE, Egypt, Saudi Arabia, Nigeria, and Iraq. Even non-state actors such as Hezbollah and Hamas now deploy drones for reconnaissance and attack missions, eroding the monopoly once held by nation-states over advanced weaponry.

AI is accelerating this diffusion. Reports suggest that Ukraine has equipped long-range drones with AI systems capable of autonomously identifying terrain and targeting enemy assets, including successful strikes on Russian oil refineries. Israel has deployed its “Lavender” AI system to identify tens of thousands of potential Hamas targets in Gaza, leading some analysts to call it the first true “AI war”.

Despite this rapid progress, there is no verified instance of a fully autonomous weapons system (AWS) operating without human control. But the line is thinning. Each technological breakthrough edges warfare closer to a scenario where machines decide who lives and who dies.

The ethical implications are terrifying. AI systems lack the moral nuance humans bring to distinguish between combatants and civilians. Algorithms, however advanced, cannot comprehend the chaos, fear, and desperation that define war.

They only process input and output. As militaries integrate AI deeper into command systems, the risk grows that future conflicts could be fought by machines that do not understand the human cost of their calculations.

International law, already lagging behind cyber warfare, is ill-equipped to govern this next phase. The current frameworks provide little clarity on accountability or oversight when an autonomous system kills. Who is responsible: the engineer, the operator, or the algorithm itself? The truth is, no one knows.

In the end, the race for killer robots is not merely about military supremacy. It is about who controls the ethics of tomorrow’s warfare, and whether humanity can keep its hand on the trigger before machines take it away entirely.

War in an AI world: More than just machines

But AI’s impact on warfare goes beyond autonomous weapons. There is a deeper, more pervasive transformation underway, the shift from AI in warfare to warfare in an AI world. The former refers to using AI as a technology in combat. The latter describes how conflict changes when societies, military systems, and command infrastructures themselves are dependent on algorithms.

In this new reality, data becomes both terrain and target. Control of satellite feeds, logistics networks, and real-time battlefield data can determine outcomes as decisively as troop movements. Disrupting an enemy’s data streams or corrupting their models, what some strategists call algorithmic deterrence, can be as powerful as traditional firepower.

This shift also extends into the cognitive domain. From deepfakes that mislead public perception to algorithmically driven social manipulation, conflict now includes battles for hearts and minds long before bullets fly. Early in Ukraine’s war, a fake video of President Volodymyr Zelensky appeared online, urging surrender. Similar disinformation campaigns have circulated in other conflicts, underscoring how easily AI can blur the line between truth and falsehood.

Global AI ecosystem warfare is another front. Nations covet dominance in semiconductor supply chains, cloud infrastructure, and foundational AI models. Attacks on such infrastructure can cripple a rival’s technological edge. Recent disruptions to critical facilities due to major cloud providers withdrawing services illustrate how dependent modern systems have become on a handful of tech platforms.

The integration of AI into warfare is no longer in the future; it is happening now. From battlefield autonomy to strategic data domination, the algorithms of war are shaping not just how battles are won, but how nations secure their futures.

As this new era of warfare unfolds, it could demand not only technological ingenuity but ethical clarity, legal foresight, and, perhaps most importantly, a renewed commitment to ensuring that human judgment remains at the heart of decisions over life and death.
