4 things experts say could happen with AI in 2024 — and why it could be bad news for OpenAI
- From ChatGPT to Gemini, AI has taken over Silicon Valley and the world in 2023.
- AI experts gave Business Insider their predictions for what 2024 has in store for the technology.
The AI industry has had a wild 2023.
A year that began with ChatGPT becoming the fastest-growing app of all time has ended with the arrival of Gemini — Google's answer to OpenAI's phenomenally successful AI model. Along the way, AI has transformed almost every corner of the tech industry and sparked fears of existential doom.
Experts told Business Insider that things are unlikely to slow down in 2024, with AI likely to become an even larger part of our lives next year and everyone from Google to Elon Musk gunning for OpenAI's crown.
Here are their big predictions for the next 12 months:
1. AI will be everywhere
Google ended the year with a bang by launching Gemini, an AI model it says can match OpenAI's GPT-4, and plans to roll out more advanced versions in the next few months.
Not to be outdone, OpenAI is planning to launch a GPT store in early 2024, which will allow users to build and sell their own versions of ChatGPT.
It's part of a trend that AI experts say will see the technology become a much bigger part of our daily lives, as tech companies integrate it into as many of their products as possible and AI adoption becomes widespread.
"I think 2024 will be the year where we actually start seeing widespread adoption of all these AI tools," Charles Higgins, cofounder of AI-training startup Tromero and AI safety Ph.D. candidate, told BI.
"With a model like Gemini, the important part is accessibility. It's already integrated into products you are used to and use. So using an AI suite of tools is going to become the norm rather than the exception," he said.
Another trend to watch in 2024 is open-source models. Unlike closed systems like GPT-4 and Gemini, these models are freely available for anyone to use and modify.
Meta has bet big on this form of AI, making its Llama 2 model widely accessible and starting an "open science" alliance with other tech companies such as IBM.
However, the sheer cost of training AI models means that open alternatives developed independently of the big tech companies are unlikely to emerge anytime soon.
"Training models is really, really expensive," Sophia Kalanovska, a fellow Tromero cofounder and Ph.D. candidate, told BI.
"So the open source community is still reliant on big companies like Meta to put their models out there, because only they have the resources to train them," she said.
2. OpenAI will feel the heat
OpenAI has ridden the ChatGPT wave since the chatbot launched to incredible success in late 2022 — but in recent weeks, there have been signs that it is running into trouble.
Users have complained that ChatGPT's performance has deteriorated and that it is even refusing to carry out some tasks, with OpenAI saying it is looking into reports that the chatbot is getting "lazier."
"I do believe that ChatGPT has been quite bad over the past three weeks. There's been pretty constant network errors and the responses have become much shorter," said Kalanovska.
The chatbot's strange behavior is another illustration of how much is still unknown about how large language models work — but it has also added to the pressure on OpenAI after weeks of chaos at the company.
The dramatic departure and reappointment of Sam Altman as CEO has left the company's lead in the AI arms race looking unstable, with startup customers jumping ship to competitors and Microsoft unveiling its own AI systems to lessen its reliance on OpenAI.
With Gemini and other rival models such as Elon Musk's Grok joining the fray, next year could be even more difficult for OpenAI as the AI industry becomes increasingly crowded.
"Whatever the drama was about, there's a crack in their armor right now," said Higgins. "It rocked the boat, and I think the other big players are certainly looking to step up and take advantage."
3. AI companies face a looming copyright battle
Right now, there is a huge legal question mark hovering over the entire AI industry.
Cases such as Getty Images' lawsuit against Stability AI, which is due to go to trial in the UK next year, and the lawsuit filed by comedian Sarah Silverman and other authors against OpenAI in the US, revolve around the same unanswered question: Is it legal to train AI models on data that includes copyrighted content?
"It's an open question in most countries," Dr Andres Guadamuz, a professor of intellectual property law at the University of Sussex, told BI.
"I think that in 2024 we're going to get potentially one or two decisions that will help clarify things — but this is going to take a long time, probably between four to five years to properly settle," he said.
This legal wrangling poses an existential threat to the AI industry. Major tech companies have admitted that having to pay for the enormous amounts of copyrighted data used to train AI models would likely make it impossible to train models as large and complex as GPT-4.
"If the cases were decided tomorrow and all the AI companies lost, they would face serious consequences because they would probably have to pay out huge sums," Guadamuz said.
However, he added that while a series of crippling legal defeats for AI companies in the US and UK would set the AI revolution back considerably, it would likely not stop it entirely.
AI development, Guadamuz said, would probably simply move to countries with more relaxed rules.
4. Regulation urgently needed
US politicians famously failed to introduce new laws curbing the influence of social media — and now history seems to be in danger of repeating itself with AI.
The EU finally agreed to a set of controls for generative AI tools this month after tortuous negotiations, but despite holding a series of congressional hearings on AI, the US does not seem any closer to regulating the frontier technology.
Experts say that must change in 2024, with AI already disrupting a wide variety of jobs and fears growing over the real-world impact of AI-generated content.
"2024 is the year in which the rubber will need to start meeting the road in AI regulation," said Vincent Conitzer, a professor of computer science at Carnegie Mellon University.
"There are many regulatory initiatives that seem sensible enough at a high level, but the actual implementation details matter a lot and are still lacking and untested.
"Figuring out those details is challenging, because regulation comes together slowly and AI is now an extremely fast-moving target," he said.
Guadamuz agreed, adding that regulators would likely need to step in now rather than wait for difficult questions surrounding AI to be decided by courts.
"The law is always going to lag behind the technology. We therefore need regulation to step in, rather than waiting for the case law to be decided," he said.