
OpenAI is reportedly nearing AI systems that can reason. Here's why that could be a cause for concern.

Beatrice Nolan

  • OpenAI has shared a new five-level scale to mark progress toward artificial general intelligence.
  • The company told employees it was nearing AI systems that could reason, Bloomberg reported.

OpenAI has a new scale to mark its progress toward artificial general intelligence, or AGI.

According to a Bloomberg report, the company behind ChatGPT shared the new five-level classification system with employees at an all-hands meeting on Tuesday.

The scale ranked AI systems by levels of intelligence, from chatbots at level one, to AI systems that could do the work of entire organizations at level five.

Execs reportedly told staffers they believed OpenAI was at level one, defined as AI with conversational language skills, but was nearing level two, identified as "reasoners" with human-level problem-solving.

Progress to the next level is a sign that OpenAI chief Sam Altman is inching closer to his stated ambition of creating AGI, or AI systems that can match or surpass human capabilities across a wide range of cognitive tasks.

It's a mission that has turned into a high-stakes race against competitors since the launch of ChatGPT, as billions of dollars of investment have poured into companies vying to reach the same goal first.

Altman has said he expects major progress toward AGI will be achieved by the end of the decade.

A big deal

John Burden, a research fellow at the University of Cambridge's Leverhulme Centre for the Future of Intelligence, told Business Insider the jump from existing systems to those that could reason would be "very significant."

"If we do get some AI systems that can reason soon, I cannot understate how big of a deal that would be — we're talking about systems that would be able to come to conclusions that we don't like," he said.

Burden added that developing AI systems to this level runs the risk of the machines "reasoning past us," something that could have consequences for the workforce.

"If these systems can reason as well as humans, they're probably going to be a lot cheaper than humans to keep employed," he said.

An OpenAI representative told Bloomberg the scale also included "Agent" and "Innovator" levels, which classified AI systems by their ability to take action and aid in invention.

However, the validity of the scale itself is also up for debate.

Just a mirage?

Burden said the tech industry still appeared to be hovering at level one, which covers the chatbots now available. He added that the jump from level two up to levels three through five was "essentially trivial."

"Whatever Sam Altman wants to say to generate hype, we're still just at level one," he said. "We've got AI systems that appear to do a tiny bit of reasoning, but it's not clear if it's just a mirage."

It's also unclear whether the top end of the scale is even possible.

"The top level of the scale, where an AI can do the work of an organization, requires many other human skills beyond just reasoning," Hannah Kirk, an AI researcher at the University of Oxford, told BI.

"The ability to coordinate, not just reason, is incredibly important to move you up these levels," she said. "There's going to be many more elements of coordination, or more social intelligence aspects that are very important to moving up these levels beyond just cognitive intelligence."

Representatives for OpenAI did not immediately respond to a request for comment from Business Insider, made outside normal working hours.


