
There's only a 5% chance of AI making humans extinct, a study featuring 2,700 AI researchers has found

Jan 4, 2024, 19:44 IST
Business Insider
OpenAI, led by Sam Altman, is at the center of the debate about existential threats from AI. Justin Sullivan/Getty Images
  • Thousands of AI researchers have shared their views on the future of AI in a new study.
  • Almost 58% of the 2,778 researchers put the chance of extinction from AI at about 5%.

Over the last year, we've heard a lot about the risk of AI destroying humanity.

Industry leaders and AI heavyweights said the rapid development of the technology could have catastrophic consequences for the world.

But while most AI researchers recognize the possibility of existential threats, they don't think such dramatic outcomes are very likely, the largest survey of AI researchers has found.

In the survey, the 2,778 participants were asked questions about the social consequences of AI developments and possible timelines for the future of the tech.

Almost 58% of those surveyed said they considered the chance of human extinction, or other extremely bad outcomes brought about by the tech, to be around 5%.


The study was published by researchers and academics at universities around the world, including Oxford and the University of Bonn in Germany.

One of the paper's authors, Katja Grace, told New Scientist the survey was a signal that most AI researchers "don't find it strongly implausible that advanced AI destroys humanity." She added there was a "general belief in a non-minuscule risk."

Whether AI poses a significant threat to humanity has been the subject of intense debate in Silicon Valley in recent months.

Several AI experts, including Google Brain cofounder Andrew Ng and AI godfather Yann LeCun, have dismissed some of the bigger doomsday scenarios. LeCun has even accused tech leaders such as Sam Altman of having ulterior motives for hyping AI fears.

In October, LeCun said some of the leading AI companies were attempting "regulatory capture" of the industry by pushing for strict regulation.
