- The war in Ukraine has set the stage for unprecedented applications of autonomous drones in warfare.
- The increasing ability of drones to fight autonomously is raising serious ethical questions.
Russia's war in Ukraine provides an unprecedented testing ground for lethal drone technology. But experts are voicing their concerns over the creep of artificial intelligence into human decision-making in warfare.
Unmanned aerial vehicles, or UAVs — more commonly known as drones — have come to the fore in Ukraine more than in any other past conflict.
They're not the "decisive factor," Ingvild Bode of the University of Southern Denmark (SDU) told Insider. Tanks, trench warfare and artillery still dominate the conflict, she said, but in terms of sheer scale and variety, it's the first time drones have been used to such consequence. Insider's John Haltiwanger recently reported on this development.
While the AI governing some drone technology is developing apace, there are still few international legal norms defining the extent to which human involvement is required.
Several manufacturers of drones being used in the conflict — such as Russia's Lancet and the US-provided Switchblade — claim their machines now have autonomous or semi-autonomous capabilities.
It's unclear exactly how much these capabilities have been brought to bear in Ukraine. One Ukrainian troop commander claimed in October that his troops were already flying autonomous drone scouting missions, as New Scientist reported, citing Ukrainian media.
According to Bode — whose research focuses on the international norms around the emergence of military AI — the rush to defend Ukraine has meant that the discussion around weaponizing AI is no longer a distant hypothetical.
After claiming a naval drone strike on Russia's Black Sea fleet last fall, Ukraine began fundraising for what it called "the world's first naval fleet of drones."
And Russia's faltering conventional offensive will likely push it to rush such systems to the field before international norms are in place.
Bode says she's worried about the dash to get military AI onto the battlefield "without really thinking about the long-term consequences."
When is a swarm not a swarm?
Since mid-summer last year, Russian forces have launched Iranian Shahed-136 drones in groups of five or six, in strikes on critical Ukrainian infrastructure.
The Shahed-136s are munitions that self-destruct on impact, giving them the moniker of "suicide" drones.
Attacks involving groups of these weapons have been described as "swarms," though from an AI standpoint they don't yet involve true swarming technology.

James Rogers, also of SDU, who advises the UN on future drone technology, told Insider that "swarming drones are technically drones which can communicate with one another."
AI allows them to behave symbiotically, "like a flock of birds in the sky," as he put it. "You see them move as one and react together to external stimuli."
Currently, the drones are guided at launch by a human operator, according to independent Russian outlet Novaya Gazeta Europe.
But true swarming capabilities are on their way to the battlefield. In January 2022, Raytheon technology was used in a DARPA exercise in which just one human operator controlled 130 adapted commercial off-the-shelf drones as they swarmed autonomously and surveilled an area, the company claimed.
As part of the exercise, the drones were able to autonomously identify and decide which parts of a building they had not yet explored, Raytheon said.
The loop of control
Switchblade operators are likely "in the loop of control," as Rogers frames it, meaning drones can fly autonomously, circling an area until a target appears, but are otherwise operated by a human.
But more advanced drone technology is enabling what Rogers calls being "on" the loop of control.
In this case, a drone can be sent out in a group and will only come to the pilot's attention once it has found a target that AI has identified, Rogers explained in a video for the Center for International Governance Innovation (CIGI).
That's then relayed back to the operator, who makes a decision on whether or not to strike.
But we may soon reach a horizon where no human is involved at all, Rogers said, adding: "It's here, then, that you start to see how AI and robotics can start to take the decision about whether or not humans live or die."
Trust the machine?
Even with a human still firmly in the loop and making decisions, the involvement of AI presents unique problems, Bode told Insider.
"In the case of the systems that we have seen used, there's still a human operator authorizing the use of force," she said. But she questioned if operators put too much stock in the machine's algorithmically-informed judgement.
"If this system says: 'Okay, this [target] should be attacked,' on what basis can the operator actually decide to doubt that target prompt?" she asked.Under pressure and potentially under fire, a drone operator may take the machine's prompt less as a suggestion and more as an infallible instruction. It's human nature, Bode suggested.
"There's lots of research into automation bias," she said. "We tend to trust outputs presented to us by computer assisted systems more than our own judgment."
What do drones see when they see a human?
Military targets like tanks are relatively easy to program a machine to recognize, Bode said. But it's different when a target is human.
Drones are also involved in more than just airstrikes in Ukraine. In November, Ukraine's Ministry of Defense shared footage it said showed a surrendering Russian soldier being guided towards Ukrainian soldiers by a drone.
The Ukrainian Army even released guidelines for the process, suggesting that drones are taking on a regular role in mediating surrenders.
The process appears to involve commercial drones directly operated by a human. But as the idea of drone-mediated surrender — previously a rare occurrence — becomes normalized, both Bode and Rogers raised concerns about whether AI will really have the capacity to distinguish a surrendering human from a combatant.

"That is a hard judgment call to make even for a human, right?" Bode said.
Rogers suggested it's an untested area in international law. In a fully autonomous future of drone warfare, he asked, will drone AI be programmed "to avoid those who are waving a white flag?"
Given the speed of developments in both AI and the conflict in Ukraine, it is one of many questions left unanswered.