An AI researcher who has been warning about the technology for over 20 years says we should 'shut it all down,' and issue an 'indefinite and worldwide' ban

Mar 31, 2023, 19:39 IST
Business Insider
An AI researcher warned that "literally everyone on Earth will die" if AI development isn't shut down. (Image: iLexx/Getty Images)
  • One AI researcher who has been warning about the tech for over 20 years said to "shut it all down."
  • Eliezer Yudkowsky said the open letter calling for a pause on AI development doesn't go far enough.

An AI researcher who has warned about the dangers of the technology since the early 2000s said we should "shut it all down," in an alarming op-ed published by Time on Wednesday.

Eliezer Yudkowsky, a researcher and author who has been working on Artificial General Intelligence since 2001, wrote the article in response to an open letter from many big names in the tech world, which called for a moratorium on AI development for six months.

The letter, signed by 1,125 people including Elon Musk and Apple's co-founder Steve Wozniak, requested a pause on training AI tech more powerful than OpenAI's recently launched GPT-4.

Yudkowsky's article, titled "Pausing AI Developments Isn't Enough. We Need to Shut it All Down," said he refrained from signing the letter because it understated the "seriousness of the situation" and asked for "too little to solve it."

He wrote: "Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die."


He explained that AI "does not care for us nor for sentient life in general," and that we are currently far from being able to instill those kinds of principles in the technology.

Yudkowsky instead suggested a ban that is "indefinite and worldwide" with no exceptions for governments or militaries.

"If intelligence says that a country outside the agreement is building a GPU cluster, be less scared of a shooting conflict between nations than of the moratorium being violated; be willing to destroy a rogue data center by airstrike," Yudkowsky said.

Yudkowsky has for many years been issuing bombastic warnings about the possibly catastrophic consequences of AI. Earlier in March he was described by Bloomberg as an "AI Doomer," with author Ellen Huet noting that he has been warning about the possibility of an "AI apocalypse" for a long time.

OpenAI co-founder and CEO Sam Altman even tweeted that Yudkowsky has "done more to accelerate AGI than anyone else" and deserves "the Nobel peace prize" for his work — a remark Huet theorized was a jab implying that the researcher's warnings about the tech have only accelerated its development.


Since OpenAI launched its chatbot ChatGPT in November and it became the fastest-growing consumer app in internet history, Google, Microsoft, and other tech giants have been competing to launch their own artificial intelligence products.

Henry Ajder, an AI expert and presenter who sits on the European Advisory Council for Meta's Reality Labs, previously told Insider that tech firms are locked in a "competitive arms race environment" in an effort to be seen as "first movers," which may result in concerns around ethics and safety in AI being overlooked.

Even Altman has acknowledged fears around AI, saying on a podcast last week that "it'd be crazy not to be a little bit afraid, and I empathize with people who are a lot afraid."

He added, however, that OpenAI is taking steps to address kinks and issues with its tech, saying: "We will minimize the bad and maximize the good."
