
Apple cofounder Steve Wozniak dismisses AI concerns raised by the likes of Stephen Hawking and Nick Bostrom

Oct 9, 2016, 13:16 IST


PayPal billionaire Elon Musk, Microsoft cofounder Bill Gates, and renowned scientist Stephen Hawking have called out artificial intelligence (AI) as one of the biggest threats to humanity's very existence.


But Apple cofounder Steve Wozniak told Business Insider in an interview this week that he's not concerned about AI. At least, not anymore. He said he reversed his thinking on AI for several reasons.

"One being that Moore's Law isn't going to make those machines smart enough to think really the way a human does," said Wozniak. "Another is when machines can outthink humans they can't be as intuitive and say what will I do next and what is an approach that might get me there. They can't figure out those sorts of things.

"We aren't talking about artificial intelligence actually getting to that point. [At the moment] It's sort of like it magically might arise on its own. These machines might become independent thinkers. But if they do, they're going to be partners of humans over all other species just forever."

University of Oxford philosopher Nick Bostrom. SRF

Wozniak's comments contrast with what Swedish philosopher Nick Bostrom said at the IP Expo tech conference in London on the same day.


The academic believes that machines will achieve human-level artificial intelligence in the coming decades, before quickly going on to acquire what he describes as "superintelligence," which is also the title of a book he authored.

Bostrom, who heads the Future of Humanity Institute at the University of Oxford, thinks that humans could one day become slaves to a superior race of artificially intelligent machines. This doomsday scenario can be avoided, he says, if self-thinking machines are developed from the very beginning in a way that ensures they're going to act in the interest of humans.

Commenting on how this can be achieved, Bostrom said this doesn't mean we have to "tie its hands behind its back and hold a big stick over it in the hope we can force it to our way." Instead, he thinks developers and tech companies must "build it [AI] in such a way that it's on our side and wants the same things as we do."
