 

Eric Schmidt says there's 'no evidence' AI scaling laws are stopping — but they will eventually

Nov 15, 2024, 21:08 IST
Business Insider
Former Google CEO Eric Schmidt thinks AI models will continue showing notable improvements over the next five years. Shahar Azran via Getty Images
  • Ex-Google CEO Eric Schmidt says there's "no evidence" of an AI slowdown.
  • OpenAI and Google have been facing a performance plateau with their latest models, per reports.

Eric Schmidt says there's "no evidence" artificial intelligence scaling laws are stopping as some in Silicon Valley worry about an AI slowdown.

"These large models are scaling with an ability that is unprecedented," the former Google CEO said in an episode of "The Diary of A CEO" podcast that went live on Thursday.

He said there will be "two or three more turns of the crank of these large models" over the next five years, referring to improvements in large language models.

"There's no evidence that the scaling laws, as they're called, have begun to stop. They will eventually stop, but we're not there yet," he added.

His comments come amid a debate among Silicon Valley leaders over the feasibility of developing increasingly advanced models. AI scaling laws are the theoretical rules that broadly state models will continue to improve with more training data and greater computing power. However, recent reports have said some of the biggest AI companies are struggling to improve models at the same rate as before.


A report from The Information earlier this month said OpenAI's next flagship model, Orion, had shown only a moderate improvement over GPT-4 — a smaller leap than the advances between earlier versions.

While Orion's training is not yet complete, OpenAI has reportedly turned to additional measures to boost performance, such as baking in post-training improvements based on human feedback.

Days later, a Bloomberg report also said Google and Anthropic were seeing similar diminishing returns from their costly efforts to develop more advanced models. At Google, the coming version of its AI model Gemini is failing to live up to internal expectations, while the timetable for Anthropic's new Claude model has slipped, the report said.

While some in the industry, such as New York University professor emeritus Gary Marcus, have taken the reports as proof that LLMs have reached a point of diminishing returns, others have argued that AI models aren't reaching a performance plateau.

OpenAI CEO Sam Altman appeared to reference the debate on Thursday with a post saying, "There is no wall."


Representatives for OpenAI, Google, and Anthropic did not immediately respond to a request for comment from Business Insider, made outside normal working hours.
