7 Reasons Why Elon Musk Is Wrong To Believe Intelligent Robots Will One Day Kill Us All

Jan 21, 2015, 23:17 IST

jdlasica / flickr: Don't worry, Elon. It's going to be just fine.

A panel at the World Economic Forum at Davos in Switzerland has just completely dismantled the idea - currently trendy in the tech sector - that artificially intelligent robots, lacking morals, may one day independently decide to start killing humans.


The idea has been spread, somewhat tongue in cheek, by Tesla and SpaceX founder Elon Musk, who has even suggested that the robots might thwart any humans who try to escape them by blasting off to Mars.

AI research is advancing rapidly right now inside private companies like Facebook and Google. That R&D is mostly secret, which is why people like to speculate about it. Plus, everyone loves the Terminator movies, in which killer AI robots are the main antagonists.

The panel was hosted by two UC Berkeley professors, Ken Goldberg (who studies robotics) and Alison Gopnik (who studies psychology). They have both been trying to figure out how machines might mimic human thinking. The good news, for Musk and anyone else afraid of the imminent robot apocalypse, is that machines are still way too stupid to be lethal to humans on any meaningful level.

BI / Jim Edwards: Alison Gopnik and Ken Goldberg of UC Berkeley.

They described seven good reasons why humans are going to remain a step ahead of AI for the foreseeable future.


  1. Machines cannot learn from random "life" experience. "It is easier to simulate a grand master chess player with a machine than it is to simulate a 2-year-old child," said Gopnik. Her point is that while chess may seem complicated, it actually requires only a defined set of rules to learn. A child, on the other hand, is exposed to an infinite variety of random stimuli and learns quickly from it. Computers currently cannot learn from random inputs.
  2. Machines need humans to be smart. The most intelligent machines we have are those that receive constant input from humans. One of the most impressive learning machines is Google's search engine, Goldberg said, which learns because it is constantly being "fed" by the web activity of millions of humans. It then iterates its results from those inputs. In fact, Goldberg suggested, even the most intelligent robots may one day need to have a "human customer service" function so that they can call a human for help whenever they encounter something they do not understand. He noted the irony that when humans currently call companies for customer service, they are frequently greeted by robots. Goldberg proposed a name for this human-machine interaction, "multiplicity," which he is hoping will replace the term "singularity," which is currently used to describe independent AI learning machines.
  3. Machines cannot make jokes. Computers are bad at certain types of non-logical thinking that most define humans, Goldberg says. "I don't think we'll ever hear a robot telling a great joke in my lifetime."
  4. Machines cannot be creative: They can't do art or aesthetics. Only humans excel at tasks that posit an infinite realm of possibilities, in which a person must choose a course of action that has a high chance of success without knowing the answer in advance. Composing music is an example of this.
  5. Machines cannot have new ideas. Computers cannot think of a new idea on their own nor change an idea they already have, Gopnik says. This will keep machines on a leash for a long time.
  6. Machines cannot play. Play involves using creativity as a strategy to fulfil a goal, and machines can't do it.
  7. Stupid humans are way more dangerous than smart machines. It is far more likely that a human will be killed by a dumb machine made by a stupid human than a smart machine making its own decisions. Gopnik gave the example of autonomous weapons or stock market software. Those devices aren't intelligent but they can be incredibly dangerous in the hands of stupid humans.

Gopnik did have one thing to say to anyone who worries that humans will end up inside The Matrix, from the movies in which people think they are having a good time when in fact all they are doing is feeding artificially intelligent machines. "It's actually [already] just true," she said. Her favourite example is cat photos. One of the phenomena that AI scientists get most excited about is the improvement in the ability of a machine to recognise a picture of a cat on the internet. This step forward in AI has occurred because the internet has amassed an astonishingly rich collection of cat photos, uploaded by humans. AI software uses all those photos to refine its ability to distinguish a cat from non-cat objects.

"We're just feeding the machines," she says.
