
Charlie Munger's last big call was probably right: AI is overhyped

Nov 29, 2023, 22:58 IST
Insider
Charlie Munger thought AI was overhyped. Nati Harnik/AP Photo
  • Charlie Munger, who passed away Tuesday at the age of 99, had a last big call on AI: it's overhyped.
  • The legendary investor said he was "personally skeptical" about the hype.

Charlie Munger got the big calls right.

Decisions to back Apple, Costco, and Coca-Cola with his investing partner Warren Buffett cemented Munger's reputation as a legendary investor and fueled Berkshire Hathaway's cash pile to a record $157 billion.

Munger, who died at the age of 99 on Tuesday, was also skeptical about faddy boom sectors such as crypto and bitcoin, which he described as "rat poison."

Speaking in May, Munger said he was "personally skeptical of some of the hype that has gone into artificial intelligence."

This was as OpenAI, creator of ChatGPT, raised billions of dollars from Microsoft and venture capitalists plowed funds into months-old AI startups at high valuations.


Counterintuitive as it seemed, his last big call could be right: AI is overhyped.

Here's why.

1. It isn't profitable yet

The original source of 2023's AI hype – ChatGPT-developer OpenAI – looks pretty messy right now after one of the most dramatic boardroom showdowns in Silicon Valley history.

Over five days, OpenAI's board ousted CEO Sam Altman, rattled through two other CEO candidates, and then reinstated Altman. An accompanying sideshow was Altman briefly accepting a job offer at OpenAI's biggest investor, Microsoft, and almost taking most of the company with him.

It doesn't feel like a company that should be valued at $86 billion. Private and public valuations are not directly comparable but, for reference, companies including Ferrari, British American Tobacco, and Dell all have lower market capitalizations.


That OpenAI has a slightly byzantine corporate structure doesn't help either. A nonprofit parent governs a capped-profit company: the capped-profit arm primarily aims to accelerate the development of AI, while the nonprofit primarily wants to minimize its harms. From an investor's perspective, that raises questions about whether there's an inbuilt cap on the company's growth.

It's not clear anyone else is making much money directly from AI.

DeepMind, the AI lab bought by Google in 2014, has struggled to strike a balance between operating as a research lab versus a commercial entity, and reported losses for years. Google had to write off $1.3 billion in debt for DeepMind in 2019. Google has now rolled the lab in with another AI division, Google Brain, and no longer explicitly breaks out financials.

2. A lot of AI startups are VC-funded

Another problem from the Munger perspective: lots of OpenAI rivals, such as Stability AI, Inflection AI, and Anthropic, look to venture capital to fund their growth.

Munger was skeptical about VCs' ability to make good investments, and suggested they tread a fine line between investment and gambling.


There's also a broader industry question of whether money that needs to be returned quickly is the right growth and funding mechanism for AI firms, which require deep technical talent, research, and compute.

3. AI has yet to prove its case

Finally, there's the big question of whether the technology will deliver on its big promises.

AI is touted as a technology with the potential to be as monumental as the internet. OpenAI's Altman and other AI CEOs talk very publicly about human-level AI being on the horizon.

But such claims are worth taking with a grain of salt.

Generative AI generally depends on the work of others – an issue that threatens to come to a head through copyright lawsuits filed by authors such as Michael Chabon. Large language models are still prone to hallucinations, too, making them far from reliable. And whether the technology can generalize beyond its training data is up for debate, as a recent paper from Google researchers suggested.


Munger made investment mistakes. But he may be right for now: human intelligence works just fine.
