
OpenAI cofounder says artificial general intelligence is coming fast and needs some 'reasonable limits'

Beatrice Nolan and Jyoti Mann

  • OpenAI's John Schulman says artificial general intelligence could be two to three years away.
  • Schulman emphasizes the need for tech companies to cooperate for the safe development of AGI.

The age of AGI is coming and could be just a few years away, according to OpenAI cofounder John Schulman.

Speaking on a podcast with Dwarkesh Patel, Schulman predicted that artificial general intelligence could be achieved in "two or three years."

He added that tech companies needed to be ready to cooperate to ensure the technology was developed safely.

"Everyone needs to agree on some reasonable limits to deployment or to further training, for this to work. Otherwise, you have the race dynamics where everyone's trying to stay ahead, and that might require compromising on safety."

Schulman also said there would need to be "some coordination among the larger entities that are doing this kind of training."

AGI is a somewhat contested term, but it is generally understood to refer to AI systems that can match complex human capabilities such as common sense and reasoning.

Experts have long warned that this level of advanced AI could pose existential threats to humanity, including the risk of an AI takeover or of humans becoming obsolete in the workforce.

Tech companies are racing to develop this futuristic technology. OpenAI, where Schulman still works, is one of the frontrunners to achieve AGI first.

Schulman said on the podcast: "If AGI came way sooner than expected we would definitely want to be careful about it. We might want to slow down a little bit on training and deployment until we're pretty sure we know we can deal with it safely."

He added that companies needed to be prepared to "pause either further training, or pause deployment, or avoiding certain types of training that we think might be riskier. So just setting up some reasonable rules for what everyone should do to having everyone somewhat limit these things."

Some industry experts called for a similar pause after OpenAI released its GPT-4 model. In March last year, Elon Musk was among multiple experts who signed a letter raising concerns about the development of AI. The signatories called for a six-month pause on the training of AI systems more powerful than GPT-4.

OpenAI did not immediately respond to a request for comment from Business Insider, made outside normal working hours.

Last week, an OpenAI spokesperson, Kayla Wood, told The Washington Post that Schulman had taken over leadership of the company's safety research efforts.

The change came after Jan Leike, who led OpenAI's Superalignment team, resigned last week and later accused the company of prioritizing "shiny products" over safety.

The team has since been dissolved following the departure of several of its members, including chief scientist Ilya Sutskever. A spokesperson for OpenAI told The Information that the remaining staffers were now part of its core research team.

Schulman's comments come amid protest movements calling for a pause on training AI models. Groups such as Pause AI fear that if firms like OpenAI create superintelligent AI models, they could pose existential risks to humanity.

Pause AI protesters held a demonstration outside OpenAI's headquarters last week as it announced its GPT-4o model.

