AI one-percenters seizing power forever is the real doomsday scenario, warns AI godfather

Oct 30, 2023, 21:28 IST
Business Insider
AI godfather Yann LeCun has fired shots at notable AI leaders. Kevin Dietsch/Getty Images
  • An AI godfather has had it with the doomsayers.
  • Meta's Yann LeCun thinks tech bosses' bleak comments on AI risks could do more harm than good.

AI godfather Yann LeCun wants us to forget some of the more far-fetched doomsday scenarios.

He sees a different, real threat on the horizon: the rise of power-hungry one-percenters who rob everyone else of AI's riches.

Over the weekend, Meta's chief AI scientist accused some of the most prominent founders in AI of "fear-mongering" and "massive corporate lobbying" to serve their own interests.


He named OpenAI's Sam Altman, Google DeepMind's Demis Hassabis, and Anthropic's Dario Amodei in a lengthy post on X.

"Altman, Hassabis, and Amodei are the ones doing massive corporate lobbying at the moment," LeCun wrote, referring to these founders' role in shaping regulatory conversations about AI safety. "They are the ones who are attempting to perform a regulatory capture of the AI industry."


He added that if these efforts succeed, the outcome would be a "catastrophe" because "a small number of companies will control AI."

That's significant because, as almost everyone who matters in tech agrees, AI is the biggest development in technology since the microchip or the internet.

Altman, Hassabis, and Amodei did not immediately respond to Insider's request for comment.

LeCun's comments came in response to a post on X from physicist Max Tegmark, who suggested that LeCun wasn't taking the AI doomsday arguments seriously enough.

"Thanks to @RishiSunak & @vonderleyen for realizing that AI xrisk arguments from Turing, Hinton, Bengio, Russell, Altman, Hassabis & Amodei can't be refuted with snark and corporate lobbying alone," Tegmark wrote, referring to the UK's upcoming global AI safety summit.

LeCun says founder fretting is just lobbying

Since the launch of ChatGPT, AI's power players have become major public figures.


But, LeCun said, founders such as Altman and Hassabis have spent a lot of time drumming up fear about the very technology they're selling.

In March, more than 1,000 tech leaders, including Elon Musk, Altman, Hassabis, and Amodei, signed a letter calling for a minimum six-month pause on AI development.

The letter cited "profound risks to society and humanity" posed by hypothetical AI systems. Tegmark, one of the letter's signatories, has described AI development as "a suicide race."

LeCun and others say these kinds of headline-grabbing warnings are just about cementing power and skating over the real, imminent risks of AI.

Those risks include worker exploitation and data theft that generates profit for "a handful of entities," according to the Distributed AI Research Institute (DAIR).


The focus on hypothetical dangers also diverts attention away from the boring-but-important question of how AI development actually takes shape.

LeCun has described how people are "hyperventilating about AI risk" because they have fallen for what he describes as the myth of the "hard take-off." This is the idea that "the minute you turn on a super-intelligent system, humanity is doomed."

But imminent doom is unlikely, he argues, because every new technology in fact goes through an orderly development process before it's released widely.

So the area to focus on is, in fact, how AI is being developed right now. And for LeCun, the real danger is that AI development gets locked inside private, for-profit entities that never release their findings, while AI's open-source community gets obliterated.

His consequent worry is that regulators let it happen because they're distracted by killer robot arguments.


Leaders like LeCun have championed open-source developers, whose work on tools that rival, say, OpenAI's ChatGPT brings a new level of transparency to AI development.

LeCun's employer, Meta, made LLaMa 2, its own large language model that competes with GPT, (somewhat) open source. The idea is that the broader tech community can look under the hood of the model. No other big tech company has done a similar open-source release, though OpenAI is rumored to be thinking about it.

For LeCun, keeping AI development closed is a real reason for alarm.

"The alternative, which will inevitably happen if open source AI is regulated out of existence, is that a small number of companies from the West Coast of the US and China will control AI platform and hence control people's entire digital diet," he wrote.

"What does that mean for democracy? What does that mean for cultural diversity?"
