Military tech is racing towards a dangerous AI future, and Russia's war in Ukraine is paving the way, drone experts say

Jan 26, 2023, 21:34 IST
Business Insider
A Ukrainian soldier watches a drone feed from an underground command center in Bakhmut, Donetsk region, Ukraine, December 25, 2022. Libkos/AP Photo
  • The war in Ukraine has set the stage for unprecedented applications of autonomous drones in warfare.
  • The increasing ability of drones to fight autonomously is raising serious ethical questions.

Russia's war in Ukraine provides an unprecedented testing ground for lethal drone technology. But experts are voicing concerns over the creep of artificial intelligence into human decision-making in warfare.

Unmanned aerial vehicles, or UAVs — more commonly known as drones — have come to the fore in Ukraine more than in any other past conflict.

Drones are not the "decisive factor," Ingvild Bode of the University of Southern Denmark (SDU) told Insider. Tanks, trench warfare, and artillery still dominate the conflict, she said, but in terms of sheer scale and variety, it's the first time drones have been used to such consequence, as Insider's John Haltiwanger recently reported.

While the AI governing some drone technology is developing apace, there are still few international legal norms defining the extent to which human involvement is required.

Several manufacturers of drones being used in the conflict — such as Russia's Lancet and the US-provided Switchblade — claim their machines now have autonomous or semi-autonomous capabilities.


It's unclear exactly how much these capabilities have been brought to bear in Ukraine. In October, one Ukrainian troop commander claimed his troops were already flying autonomous drone scouting missions, as New Scientist reported, citing Ukrainian media.

According to Bode — whose research focuses on the international norms around the emergence of military AI — the rush to defend Ukraine has meant that the discussion around weaponizing AI is no longer a distant hypothetical.

After claiming a naval drone strike on Russia's Black Sea fleet last fall, Ukraine began fundraising for what it called "the world's first naval fleet of drones."

And Russia's faltering conventional offensive will likely push it to rush systems into the field before international norms are in place.

Bode says she's worried about the dash to get military AI onto the battlefield "without really thinking about the long-term consequences."


When is a swarm not a swarm?

Since mid-summer last year, Russian forces have launched Iranian Shahed-136 drones in groups of five or six, in strikes on critical Ukrainian infrastructure.

The Shahed-136s are munitions that self-destruct on impact, earning them the moniker of "suicide" drones.

A drone, considered by Ukrainian authorities to be an Iranian-made Shahed-136, over Kyiv on October 17, 2022. REUTERS/Roman Petushkov
Attacks involving groups of these weapons have been described as "swarms," though from an AI standpoint they don't yet involve true swarming technology.

James Rogers, also of SDU, who advises the UN on future drone technology, told Insider that "swarming drones are technically drones which can communicate with one another."

AI allows them to behave symbiotically, "like a flock of birds in the sky," as he put it. "You see them move as one and react together to external stimuli."
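
Rogers' flock analogy maps closely onto a classic idea in computing: Craig Reynolds' "boids" model, in which each agent follows a few simple local rules (stay close to neighbors, don't collide, match their heading) and coordinated flock-like motion emerges with no central controller. The Python sketch below illustrates only that general principle; it is not based on any military system described in this article.

```python
import random

# Minimal boids-style flocking sketch (after Craig Reynolds' classic model).
# Illustrative only: shows how agents that sense their neighbors can
# "move as one" without any central controller.

COHESION, SEPARATION, ALIGNMENT = 0.01, 0.05, 0.125
NEIGHBOR_RADIUS, MIN_DIST = 50.0, 10.0

class Boid:
    def __init__(self):
        self.x, self.y = random.uniform(0, 200), random.uniform(0, 200)
        self.vx, self.vy = random.uniform(-1, 1), random.uniform(-1, 1)

def step(boids):
    for b in boids:
        neighbors = [
            o for o in boids
            if o is not b and (o.x - b.x) ** 2 + (o.y - b.y) ** 2 < NEIGHBOR_RADIUS ** 2
        ]
        if not neighbors:
            continue
        n = len(neighbors)
        # Cohesion: drift toward the center of nearby flockmates.
        b.vx += (sum(o.x for o in neighbors) / n - b.x) * COHESION
        b.vy += (sum(o.y for o in neighbors) / n - b.y) * COHESION
        # Separation: push away from flockmates that are too close.
        for o in neighbors:
            if (o.x - b.x) ** 2 + (o.y - b.y) ** 2 < MIN_DIST ** 2:
                b.vx -= (o.x - b.x) * SEPARATION
                b.vy -= (o.y - b.y) * SEPARATION
        # Alignment: match the average heading of the neighborhood.
        b.vx += (sum(o.vx for o in neighbors) / n - b.vx) * ALIGNMENT
        b.vy += (sum(o.vy for o in neighbors) / n - b.vy) * ALIGNMENT
    for b in boids:
        b.x += b.vx
        b.y += b.vy

flock = [Boid() for _ in range(30)]
for _ in range(100):
    step(flock)
```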

Currently, the drones are guided at launch by a human operator, according to independent Russian outlet Novaya Gazeta Europe.


But true swarming capabilities are on their way to the battlefield. In January 2022, Raytheon technology was used in a DARPA exercise in which just one human operator controlled 130 adapted commercial off-the-shelf drones as they swarmed autonomously and surveilled an area, the company claimed.

As part of the exercise, the drones were able to autonomously identify and decide which parts of a building they had not yet explored, Raytheon said.

The loop of control

Switchblade operators are likely "in the loop of control," as Rogers frames it, meaning drones can fly autonomously, circling an area until a target appears, but are otherwise operated by a human.

But more advanced drone technology is enabling what Rogers calls being "on" the loop of control.

In this case, a drone can be sent out in a group and will only come to the pilot's attention once it has found a target that AI has identified, Rogers explained in a video for the Center for International Governance Innovation (CIGI).


That's then relayed back to the operator, who makes a decision on whether or not to strike.
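
To make the distinction concrete, here is a hypothetical sketch of where the human sits in each control model. Every name, signature, and threshold below is invented for illustration and is not drawn from any fielded system.

```python
# Hypothetical sketch of "in the loop" vs. "on the loop" control.
# All names and thresholds are invented for illustration.

def run_mission(frames, classify, operator_decides, mode="in_the_loop",
                threshold=0.9):
    """frames: stream of sensor images;
    classify(frame) -> (is_target, confidence), the onboard AI;
    operator_decides(frame) -> bool, the human strike authorization."""
    for frame in frames:
        is_target, confidence = classify(frame)
        if mode == "in_the_loop":
            # In the loop: the human watches the feed throughout; autonomy
            # extends only to loitering until something appears.
            if operator_decides(frame):
                return frame
        else:  # "on_the_loop"
            # On the loop: the drone searches autonomously, and the operator
            # is alerted only once the AI has already flagged a target.
            if is_target and confidence >= threshold and operator_decides(frame):
                return frame
    # A fully autonomous system would drop operator_decides() entirely:
    # the step Rogers warns about below.
    return None
```

In both modes the final authorization still rests with a person; the difference is how much searching and identifying the machine does before that person is ever consulted.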

But we may soon reach a horizon where no human is involved at all, Rogers said, adding: "It's here, then, that you start to see how AI and robotics can start to take the decision about whether or not humans live or die."

Trust the machine?

Even with a human still firmly in the loop and making decisions, the involvement of AI presents unique problems, Bode told Insider.

"In the case of the systems that we have seen used, there's still a human operator authorizing the use of force," she said. But she questioned if operators put too much stock in the machine's algorithmically-informed judgement.

Ukrainian soldiers launch a drone at Russian positions near Bakhmut, Donetsk region, Ukraine, December 15, 2022. Associated Press
"If this system says: 'Okay, this [target] should be attacked,' on what basis can the operator actually decide to doubt that target prompt?" she asked.

Under pressure and potentially under fire, a drone operator may take the machine's prompt less as a suggestion and more as an infallible instruction. It's human nature, Bode suggested.


"There's lots of research into automation bias," she said. "We tend to trust outputs presented to us by computer assisted systems more than our own judgment."

What do drones see when they see a human?

Military targets like tanks are relatively easy to program a machine to recognize, Bode said. But it's different when a target is human.

Drones are also involved in more than just airstrikes in Ukraine. In November, Ukraine's Ministry of Defense shared footage it said showed a surrendering Russian soldier being guided towards Ukrainian soldiers by a drone.

The Ukrainian Army even released guidelines for the process, suggesting that drones are taking a regular role in mediating surrenders.

A still from an instructional video issued by the Armed Forces of Ukraine on December 12, 2022, showing Russian soldiers how to surrender to a drone. General Staff of the Armed Forces of Ukraine/Facebook
The process appears to involve commercial drones directly operated by a human. But as the idea of drone-mediated surrender — previously a rare occurrence — becomes normalized, both Bode and Rogers raised concerns about whether AI will really have the capacity to distinguish a surrendering human from a combatant.

"That is a hard judgment call to make even for a human, right?" Bode said.


Rogers suggested it's an untested area in international law. In a fully autonomous future of drone warfare, he asked, will drone AI be programmed "to avoid those who are waving a white flag?"

Given the speed of developments in AI and in the conflict in Ukraine, it is one of many questions that remain unanswered.
