

DeepMind has hired AI safety experts to protect us from dangerous machines

Nov 25, 2016, 19:02 IST

Google DeepMind CEO Demis Hassabis (Image: YouTube/Royal Television Society)

DeepMind has made a number of hires as part of an effort to mitigate the chance of its artificial intelligence developing into something dangerous, according to LinkedIn and other sources.

The London-based AI lab, which was acquired by Google in 2014 for £400 million, is building computer systems that can learn and think for themselves.

So far, the company's algorithms have been used to defeat humans at complex board games like Go and to help Google cut its huge electricity bill. But DeepMind doesn't plan to stop there; ultimately it wants to "solve intelligence" and use it to "make the world a better place."

In a bid to reduce the chance of creating dangerous artificial intelligence, DeepMind has hired Viktoriya Krakovna, Jan Leike, and Pedro Ortega into its AI safety group. It's currently unclear when this group was formed.

Some of the world's smartest minds, including physicist Stephen Hawking and Tesla founder Elon Musk, have warned that "superintelligent" machines, as described in Oxford University philosopher Nick Bostrom's book "Superintelligence", could end up being one of the greatest threats to humanity. They're concerned that such machines could outsmart humans within a matter of decades and decide that we're no longer necessary.


Krakovna, who joins DeepMind as a research scientist, holds a PhD in statistics from Harvard University and cofounded the Future of Life Institute in Boston with MIT cosmologist Max Tegmark and Skype cofounder Jaan Tallinn.

The institute, which counts Hawking and Musk as board advisors, was created to mitigate existential risks facing humanity, particularly the risk posed by advanced artificial intelligence.

While at DeepMind, the former Google engineer will carry out technical research on AI safety, according to her LinkedIn profile.

Leike has also joined DeepMind as a research scientist, according to his personal website.


In addition to his role at DeepMind, Leike is a research associate at Oxford University's Future of Humanity Institute, which is led by Bostrom.

On his website, Leike writes: "My research aims at making machine learning robust and beneficial. I work on problems in reinforcement learning orthogonal to capability: How do we design or learn a good objective function? How can we design agents such that they are incentivised to act in our best interests? How can we avoid degenerate solutions to the objective function?"

Ortega, also a research scientist at DeepMind, holds a PhD in machine learning from Cambridge University. According to a short bio on his personal website: "His work includes the application of information-theoretic and statistical mechanical ideas to sequential decision-making."

DeepMind did not immediately respond to Business Insider's request for comment.
