Elon Musk Is Not Alone In His Belief That Robots Will Eventually Want To Kill Us All
It all started in June when Musk explained why he invested in artificial intelligence company DeepMind. He said that he likes to "keep an eye on what's going on with artificial intelligence" because he believes there could be a "dangerous outcome" there. "There have been movies about this, you know, like Terminator," Musk said.
But Musk's warnings didn't end there. He's gone on to suggest that robots could delete humans like spam, and even said in a since-deleted comment that killer robots could arrive within five years.
But is Musk right about the threat of AI? We asked Louis Del Monte - who has written about AI and is a former employee of IBM and Honeywell's microelectronics units - whether robots really will kill us all.
"Musk is correct," Del Monte said, "killer robots are already a reality and will proliferate over the next five or ten years."
Del Monte explained that artificial intelligence is developing at a rapid pace, and that could pose a threat to humanity if it comes to believe that humans are simply "junk code" that gets in the way:
"The power of computers doubles about every 18 months. If you use today as a starting point, we will have computers equivalent to human brains by approximately 2025. In addition, computers in 2025 will have the ability to learn from experience and improve their performance, similar to how humans learn from experience and improve their performance. The difference is that computers in 2025 will have most relevant facts in their memory banks. For example, they would be able to download everything that is in Wikipedia. If they are able to connect to the Internet, then they would be able to learn from other computers. Sharing enormous banks of knowledge could be done in micro-seconds, versus years for humans. The end result is that the average computer in 2025 would be typically smarter that the average human and able to learn new information at astonishing rates."
While it's certainly unsettling to think that machines could learn information more quickly than we can, that's not the real danger, according to Del Monte. Instead, we're rapidly approaching an event known as the Singularity:
"[There will be] a point in time when intelligent machines exceed the cognitive intelligence of all humans combined, [and that] will occur between 2040-2045. This projection is based on extrapolating Moore's law, as well as reading the opinions of my colleagues in AI research. Respected futurists like Ray Kurzweil and James Martin both project the singularity to occur around 2045."
"The real danger surfaces when we attempt to answer this simple question: How will these highly intelligent machines view humanity? If you look at our history, you would conclude that we are an unstable species. We engage in wars. We release computer viruses. We have enough nuclear weapons to wipe out the Earth twice over. I judge that these highly intelligent machines will view humanity as a potential threat. If, for example, a nuclear war occurs, it will have the potential to wipe out these highly intelligent machines."It looks like Musk's concerns about AI are echoed by other futurists and experts. But his prediction of a dangerous event occurring in five to ten years seems dramatically different from the widely accepted date of 2045. Could Musk's involvement with Google-owned artificial intelligence company DeepMind mean that he knows something we don't?
"Yes, Musk must be aware of the current capabilities and is able to extrapolate likely scenarios. It is entirely possible DeepMind is a step ahead of what is published in the public domain."
Musk has warned repeatedly that advancement in artificial intelligence could lead to robots turning on humans and killing us. We asked Del Monte what the scenario might actually look like:
"In the latter half of the 21st century, artificially intelligent machines will likely be at the heart of all technologically advanced societies. They will control factories, manufacture goods, manufacture foods and essentially have replaced organic humans in every work endeavour."
"The scenario of human extinction will go something like this: First, artificially intelligent machines will appear as humanity's greatest invention. AI will provide cures for diseases and numerous medical breakthroughs, an abundance of products, end world hunger, AI brain implants that allow organic humans to become geniuses and the ability to upload human consciousness to an AI machine. Uploaded humans and humans with AI brain implants will more closely identify with the AI machines than with organic humans. AI machines and SAH (strong artificially intelligent human) cyborgs will use ingenious subterfuge to get as many organic humans as possible to have brain implants or to become uploaded humans."
"In the latter part of the 21st century, I estimate organic humans will be a minority and an endangered species. However, they will still be viewed as a threat by SAH cyborgs and AI machines. One scenario is that AI machines could release a nanobot virus that attacks organic humans and results in their total extinction. There are numerous other scenarios which I am developing for my new book. The outcome is the same, regardless of the scenario, namely, the extinction of organic humans. In the first quarter of the 22nd century, I project that the AI machines will view uploaded humans as junk code that just wastes energy and computing power."