Ukraine drone warfare reignites the race for killer robots

The nearest the world has seen to a fully autonomous weapon in the Ukrainian conflict to date comes from Britain, according to Justin Bronk, a senior research fellow for air power and technology at the Royal United Services Institute (RUSI). 

He says the Brimstone missile, made by Franco-British company MBDA, operates with something approaching true independence from humans, paving the way for the weapons of the future.

Describing it as a “fire and forget” missile, Bronk says Brimstone One, the first variant of the weapon and the one experts believe the UK has supplied to Ukraine, can hunt down tanks or other vehicles within a target grid assigned by its operators. Soldiers designate an area for the missile to search; it then finds and destroys enemy armour within it on its own.

“Not only do they go and find and kill tanks … or other vehicles in the target grid that you’ve given them,” says Bronk, but they “also communicate between each other to make sure that they don’t double up on targets.”
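
A crude way to picture that de-confliction is a shared set of claimed targets, as in the Python sketch below. It is illustrative only, not MBDA’s published logic: the inter-missile datalink is modelled as a plain in-memory set, and all names are invented.

```python
# Toy sketch of target de-confliction among a salvo of munitions.
# Assumption: the datalink is modelled as a common set of claimed
# target IDs; the real protocol is not public.

def assign_targets(munitions: list[str], detected: list[str]) -> dict[str, str]:
    """Greedily give each munition the first target nobody has claimed yet."""
    claimed: set[str] = set()
    assignments: dict[str, str] = {}
    for munition in munitions:
        for target in detected:
            if target not in claimed:
                claimed.add(target)        # "broadcast": this target is taken
                assignments[munition] = target
                break
    return assignments

print(assign_targets(["m1", "m2", "m3"], ["tank-A", "tank-B"]))
# {'m1': 'tank-A', 'm2': 'tank-B'}  -- m3 holds fire; nothing unclaimed is left
```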

King’s College’s Prof Payne explains that autonomy in weapon systems isn’t just about having a decision-making robot zooming over battlefields: “It’s using AI or machine learning in the various stages of what’s gruesomely called the kill chain. It’s the use of AI to detect a signature from a target, then the use of AI to autonomously analyse that signature to determine what is in bounds.”
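
Payne’s two stages can be caricatured in code: one function detects a signature, and a second autonomously decides whether what it found is “in bounds”. The sketch below is a hypothetical illustration of that pipeline; the thresholds, field names and target classes are all assumptions, not any fielded system’s rules.

```python
# Hypothetical two-stage "kill chain" sketch: detect a signature, then
# autonomously analyse it to decide whether it is in bounds.

VALID_TARGET_CLASSES = {"tank", "armoured_vehicle"}  # assumed engagement rules

def detect_signature(sensor_frame: dict) -> dict | None:
    """Stage 1: flag anything with a strong vehicle-like thermal signature."""
    if sensor_frame.get("thermal_contrast", 0.0) > 0.7:
        return {"kind": sensor_frame.get("shape_class", "unknown"),
                "confidence": sensor_frame["thermal_contrast"]}
    return None

def is_in_bounds(signature: dict, threshold: float = 0.9) -> bool:
    """Stage 2: only listed classes, at high confidence, pass."""
    return (signature["kind"] in VALID_TARGET_CLASSES
            and signature["confidence"] >= threshold)

frame = {"thermal_contrast": 0.95, "shape_class": "tank"}
sig = detect_signature(frame)
if sig and is_in_bounds(sig):
    print("candidate target flagged for engagement decision")
```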

That kind of analytical capability tends to be deployed in drones that sit above the battlefield waiting for targets of opportunity — loitering munitions. 

Such technologies deployed by the Russians may not be working as well as they ought to, according to RUSI’s Bronk: “They seem to fail a lot. A lot of them are just falling out of the sky without detonating.”

Dr Kristian Gustafsson of Brunel University’s intelligence and security studies department says Russia has been rumoured since 2017 to be working on AI-guided missiles that can decide to switch targets mid-flight, but points out that this merely mirrors technology that already exists in Western weaponry.

One area where AI may be making leaps and bounds, and where the experts think the greatest risk exists, is decision-making tools built for military commanders.

These systems hoover up battlefield intelligence, whether from pictures, video, social media feeds or written reports filed by soldiers, and analyse it to help commanders make decisions.
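
At its simplest, the ingestion step of such a tool might look like the Python sketch below: heterogeneous reports are normalised into one record format and ranked before any analysis happens. Every field name and reliability score here is invented for illustration.

```python
# Minimal sketch of intelligence ingestion for a decision-support tool.
# All field names and reliability weights are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class IntelRecord:
    source: str         # "imagery" | "social" | "field_report"
    location: str
    summary: str
    reliability: float  # 0.0 (rumour) to 1.0 (confirmed)

def normalise(raw: dict) -> IntelRecord:
    if "geotag" in raw:                       # social-media post
        return IntelRecord("social", raw["geotag"], raw["text"], 0.3)
    if "grid_ref" in raw:                     # soldier's written report
        return IntelRecord("field_report", raw["grid_ref"], raw["body"], 0.8)
    return IntelRecord("imagery", raw.get("frame_loc", "?"), raw.get("label", ""), 0.6)

feed = [{"geotag": "49.8N,36.2E", "text": "column of vehicles heading west"},
        {"grid_ref": "37U DQ 12345 67890", "body": "two tanks dug in at treeline"}]
for rec in sorted((normalise(r) for r in feed), key=lambda r: -r.reliability):
    print(rec.source, rec.location, rec.summary)
```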

Primitive forms of autonomous military decision-making have existed for decades. 

Bronk describes Russia’s Soviet-era Dead Hand nuclear command system, for instance, which is capable of launching nuclear missiles with no human input if its sensors detect a hostile nuclear missile. The idea was to preserve a strike-back capability even if Russia’s leadership were wiped out. As Bronk says, Dead Hand is “a really good way to inadvertently blow up the world.”
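
The danger Bronk identifies is easy to see once the fail-deadly rule is written down. The toy function below is a caricature, not the classified design: the point is simply that once the condition fires, no human confirmation step remains between a sensor error and a launch.

```python
# Caricature of the fail-deadly rule Bronk describes. The real Dead Hand
# design is not public; this toy exists only to show why such logic is
# "a really good way to inadvertently blow up the world".

def dead_hand_decision(attack_detected: bool, leadership_reachable: bool) -> str:
    """Return the system's action given its two inputs."""
    if attack_detected and not leadership_reachable:
        return "LAUNCH"  # retaliation with no human in the loop
    return "HOLD"

# A faulty sensor reading plus a routine communications outage is enough:
print(dead_hand_decision(attack_detected=True, leadership_reachable=False))
# LAUNCH
```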

“People inherently trust machines,” he continues, highlighting one of the ethical questions holding back weapons designers and military artificial intelligence engineers. 

“If the AI says ‘if you do this many of your own will die, but the end result will be less casualties’… if you follow that advice, are you then liable for the deaths that were caused directly by your actions?”

Experts say the current drive for greater autonomous battlefield technology is driven by the need for smart missiles, drones and bombs to carry on doing something predictable when they lose communication with their human controllers.
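
That lost-link requirement is familiar from civilian drone failsafes, and a minimal sketch of the logic might look like the Python below. The states, timings and fallback behaviours are assumptions for illustration, not any fielded weapon’s actual behaviour.

```python
# Sketch of lost-link failsafe logic: when the control link drops, fall
# back to something predictable. States and timings are assumptions.

import enum

class LinkState(enum.Enum):
    OK = "ok"
    DEGRADED = "degraded"
    LOST = "lost"

def failsafe_action(link: LinkState, seconds_since_contact: float) -> str:
    if link is LinkState.OK:
        return "continue mission under operator control"
    if link is LinkState.DEGRADED or seconds_since_contact < 30:
        return "loiter in place and attempt to re-establish link"
    return "abort: fly pre-briefed safe route home, weapons safed"

print(failsafe_action(LinkState.LOST, seconds_since_contact=120.0))
```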

“War is a great accelerator of technological development, always has been,” observes Brunel’s Dr Gustafsson. Without increasingly sophisticated targeting technology on board, he argues, AI-driven bombs and missiles might end up striking friendly forces on the battlefield – something no Western military commander is going to put up with for long. 

“Not really Terminator T-1000, but rather more avoiding blue-on-blue,” he says, using the military term for friendly fire.

In the West, it seems unlikely we’ll see truly autonomous weapons in the near future. For all the increasing reliance on AI in weapon systems and the advances being made in computer science, there’s a mixture of practical, pragmatic and ethical reasons why killer robots will not be taking over the world any time soon.
