
The Moral Implications Of Robots That Kill

Jun 5, 2014, 19:08 IST

ATLAS, a robot by Boston Dynamics (Andrew Innerarity / Reuters)

Lethal autonomous weapons - robots that can kill people without human intervention - aren't yet on our battlefields, but the technology to build them is nearly here.


As you can imagine, killer robots raise a host of concerns spanning wartime strategy, morality, and philosophy. The debate is perhaps best summarized by this soundbite from The Washington Post: "Who is responsible when a fully autonomous robot kills an innocent? How can we allow a world where decisions over life and death are entirely mechanized?"

These are questions the United Nations is taking quite seriously; it discussed them in depth at a meeting last month. Nobel Peace Prize laureates Jody Williams, Archbishop Desmond Tutu, and former South African President F.W. de Klerk are among a group calling for an outright ban on the technology. Others, however, are skeptical that a ban would work, pointing to historical precedent that weapons bans are difficult to enforce:

While some experts want an outright ban, Ronald Arkin of the Georgia Institute of Technology pointed out that Pope Innocent II tried to ban the crossbow in 1139, and argued that it would be almost impossible to enforce such a ban. Much better, he argued, to develop these technologies in ways that might make war zones safer for non-combatants.

Arkin suggests that "if these robots are used illegally, the policymakers, soldiers, industrialists and, yes, scientists involved should be held accountable." He is suggesting, quite literally, that if a robot kills a person outside its rules or boundaries, the people involved in creating that robot are responsible. But he hedges in a 2007 book called "Killer Robots":


"It is not my belief that an unmanned system will be able to be perfectly ethical in the battlefield. But I am convinced that they can perform more ethically than human soldiers."

This is one of several issues we'll have to resolve as the technology hurtles forward like a runaway train.
