
Mark Zuckerberg: We shouldn't worry about AI overtaking humans 'unless we really mess something up'

Biz Carson   


Mark Zuckerberg during the Axel Springer interview. Photo: Daniel Biskup

Fears of the oncoming artificial intelligence revolution are rampant and growing in Silicon Valley.

If it comes down to man versus machine, a growing number of tech leaders worry that the machine will win. Industry leaders from Elon Musk to Peter Thiel to Reid Hoffman have poured money into a Y Combinator-led project to make sure it doesn't happen.

In an interview with Axel Springer CEO Mathias Döpfner for the German newspaper "Die Welt am Sonntag," Facebook CEO Mark Zuckerberg doesn't seem to worry much about it, describing Musk's reaction as a little more on the hysterical side of things.

Rather, Zuckerberg argues that machines will only overtake humans if we program them that way. Those chess-beating computers were designed to be that smart; they didn't just learn it on their own, he argues.

"I think that the default is that all the machines that we build serve humans so unless we really mess something up I think it should stay that way," Zuckberg told Döpfner in the interview.

Here's how Facebook's CEO sees the coming rise of robots and machines, and why it's not as scary as some make it out to be:

Mathias Döpfner: How will Artificial Intelligence change society?

Mark Zuckerberg: From my experience, there are really two ways that people learn. One is called supervised learning and the other is unsupervised. You can think of supervised learning as the way you read a children's book to your son or daughter and point out everything.

Here's a bird, here's a dog, there's another dog. By pointing things out, a child can eventually understand, 'Oh, that's a dog,' because you told them 15 times that that was a dog. So that's supervised learning. It's really pattern recognition. And that's all we know how to do today.

The other, unsupervised learning, is the way most people will learn in the future. You have this model of how the world works in your head, and you're refining it to predict what you think is going to happen in the future. You use that to inform your actions, and you kind of have some model: okay, I am going to take some actions and I expect this to happen in the world based on my action. AI will help us with this.
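(To make the supervised-learning idea concrete, here is a minimal sketch, assuming Python with scikit-learn, of the kind of pattern recognition Zuckerberg describes: a classifier is shown labeled examples until it can name new ones itself. The animals, features, and numbers are purely illustrative.)

```python
# A minimal sketch of supervised learning as pattern recognition.
# The labels play the role of a parent pointing out "dog" or "bird"
# over and over while reading a picture book.
from sklearn.neighbors import KNeighborsClassifier

# Toy labeled examples: [weight_kg, height_cm] for each animal (made up).
examples = [[0.03, 10], [0.05, 12], [20, 60], [25, 65], [30, 70]]
labels = ["bird", "bird", "dog", "dog", "dog"]

model = KNeighborsClassifier(n_neighbors=3)
model.fit(examples, labels)        # learn from the pointed-out examples

print(model.predict([[22, 62]]))   # -> ['dog']: it recognizes the pattern
```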

Döpfner: Can you understand the concerns that business magnate Elon Musk has expressed in that context? He seriously fears that artificial intelligence could one day dominate and take over the human brain, that the machine would be stronger than humans. Do you think that is a valid fear, or do you think it's hysterical?

Zuckerberg: I think it is more hysterical.

Döpfner: How can we make sure that computers and robots are serving people and not the other way around?

Zuckerberg: I think that the default is that all the machines that we build serve humans so unless we really mess something up I think it should stay that way.

Döpfner: But in chess, Garry Kasparov was beaten by the computer Deep Blue in the end. So there may be more and more situations where a computer is simply smarter than a human brain.

Zuckerberg: Yes, but in that case people built that machine to do something better than a human can. There are many machines throughout history that were built to do something better than a human can. I think this is an area where people overestimate what is possible with AI.

Just because you can build a machine that is better than a person at something doesn't mean that it is going to have the ability to learn new domains or connect different types of information or context to do superhuman things. This is critically important to appreciate.

Döpfner: So this is science fiction fantasy and is not going to happen in real life and we don't need to worry about the safety of human intelligence?

Zuckerberg: I think that along the way, we will also figure out how to make it safe. The dialogue today kind of reminds me of someone in the 1800s sitting around and saying: one day we might have planes and they may crash. Nonetheless, people developed planes first and then took care of flight safety. If people were focused on safety first, no one would ever have built a plane.

This fearful thinking might be standing in the way of real progress. Because if you recognize that self-driving cars are going to prevent car accidents, AI will be responsible for reducing one of the leading causes of death in the world. Similarly, AI systems will enable doctors to diagnose diseases and treat people better, so blocking that progress is probably one of the worst things you can do for making the world better.
