Terminator
I thought it'd be a cool story to interview academics and robotics professionals about the popular notion of a robot takeover, but four big names in the area declined to talk to me. A fifth person with robo street cred told me on background that people in the community fear that publicly talking about these topics could hurt their credibility, and that they think the topic has already been explained well enough.
This is a problem. A good roboticist should have a finger on the pulse of the public's popular conception of robotics and be able to speak to it. The public doesn't care about "degrees of freedom" or "state estimation and optimization for mobile robot navigation," but give a robot a gun and a mission, and they're enthralled.
More importantly, as I heard from the few roboticists who spoke to me on the record, there are real risks involved going forward, and the time to have a serious discussion about the development and regulation of robots is now.
Most people agree that the robot revolution will have benefits. People disagree about the risks.
Author and physicist Louis Del Monte told us that the robot uprising "won't be the 'Terminator' scenario, not a war. In the early part of the post-singularity world - after robots become smarter than humans - one scenario is that the machines will seek to turn humans into cyborgs. This is nearly happening now, replacing faulty limbs with artificial parts. We'll see the machines as a useful tool."
Frank Tobe, editor and publisher of the business-focused Robot Report, subscribes to Google futurist Ray Kurzweil's view of the singularity: that we're close to developing machines that can outperform the human mind, perhaps by 2045. He says we shouldn't take this lightly.
"I've become concerned that now is the time to set in motion limits, controls, and guidelines for the development and deployment of future robotic-like devices," Tobe told Business Insider.
"It's time to decide whether future robots will have superpowers - which themselves will be subject to exponential rates of progress - or be limited to services under man's control," Tobe said. "Superman or valet? I choose the latter, but I'm concerned that politicians and governments, particularly their departments of defense and industry lobbyists, will choose the former."
Kurzweil contends that as various research projects plumb the depths of the human brain with software (such as the Blue Brain Project, The Human Brain Project, and the BRAIN Initiative), humankind itself will be improved by offshoot therapies and implants.
"This seems logical to me," Tobe said. "Nevertheless, until we choose the valet option, we have to be wary that sociopathic behaviors can be programmed into future bots with unimaginable consequences."
Ryan Calo, a law professor at the University of Washington who studies robotics, adds that we should watch for warnings leading up to a potential singularity moment. If we see robots become more multipurpose and contextually aware, they may be "on their way to strong AI," says Calo. That will be a tip that they're advancing to the point of danger for humans.
Calo has also recently said that robotic capability needs to be regulated.
Andra Keay, managing director of Silicon Valley Robotics, also doesn't foresee a guns a' blazin' robot war, but she says there are issues we should confront: "I don't believe in a head-on conflict between humans and machines, but I do think that machines may profoundly change the way we live and unless we pay attention to the shifting economical and ethical boundaries, then we will create a worse world for the future," she said. "It's up to us."
When asked whether the singularity would look like a missing scene from "Terminator" or something more subtle, Jorge Heraud, CEO of agricultural robotics company Blue River Technology, said, "Much more subtle. Think C-3PO. We don't have anything to worry for a long while."
Regardless of the risk, it shouldn't be controversial that we need to discuss and regulate the future of robotics.
Northwestern Law professor John O. McGinnis makes clear how we can win the robot revolution right now in his paper, "Accelerating AI" [emphasis ours]:
Even a non-anthropomorphic human intelligence still could pose threats to mankind, but they are probably manageable threats. The greatest problem is that such artificial intelligence may be indifferent to human welfare. Thus, for instance, unless otherwise programmed, it could solve problems in ways that could lead to harm against humans. But indifference, rather than innate malevolence, is much more easily cured. Artificial intelligence can be programmed to weigh human values in its decision making. The key will be to assure such programming.
Long before any battle scenes ripped from "Terminator" could play out, the real contest will already have been decided. Forget the missiles and lasers - the only weapons of consequence here will be algorithms and the human minds creating them.
***
We asked our interview subjects for book and movie recommendations that pertain to this topic. Their responses are below.
Ryan Calo: "I would recommend 'The Machine Stops' by E.M. Forster for an eerie if exaggerated account of where technology could take the human condition."
Frank Tobe: "The James Martin Institute for Science and Civilization at the University of Oxford produced a video moderated by Michael Douglas entitled 'The Meaning of the 21st Century' and wrote a book with the same title. It might be worth your time to watch the short version: 'Revolution in Oxford'."
Andra Keay: "I enjoy Daniel Wilson's books, but also the sci-fi of Octavia Butler and other writers who delve into the different inner lives that simple changes in biology create, whether human, alien, or robot."
Jorge Heraud: "'Star Wars'."