
OpenAI just dissolved its team dedicated to managing AI risks, like the possibility of it 'going rogue'

May 18, 2024, 01:46 IST
Business Insider
Ilya Sutskever played a key role in ousting Sam Altman last year, and recently announced he was leaving the company. Jack Guez/Getty
  • OpenAI's Superalignment team was formed in July 2023 to mitigate AI risks, like "rogue" behavior.
  • OpenAI has reportedly disbanded its Superalignment team after its co-leaders resigned.

In the same week that OpenAI launched GPT-4o, its most human-like AI yet, the company dissolved its Superalignment team, Wired first reported.

OpenAI created its Superalignment team in July 2023, co-led by Ilya Sutskever and Jan Leike. The team was dedicated to mitigating AI risks, such as the possibility of it "going rogue."

The team reportedly disbanded days after Sutskever and Leike announced their resignations earlier this week. Sutskever said in his post that he felt "confident that OpenAI will build AGI that is both safe and beneficial" under the current leadership.

He added that he was "excited for what comes next," which he described as a "project that is very personally meaningful" to him. The former OpenAI executive hasn't elaborated but said he would share details in time.

Sutskever, a cofounder and former chief scientist at OpenAI, made headlines when he announced his departure. The executive played a role in the ousting of CEO Sam Altman in November. Though he later expressed regret for contributing to Altman's removal, Sutskever's future at OpenAI had been in question since Altman's reinstatement.


Following Sutskever's announcement, Leike posted on X, formerly Twitter, that he was also leaving OpenAI. In a series of posts on Friday, the former executive explained that his departure came after disagreeing with the company's core priorities for "quite some time."

Leike said his team has been "sailing against the wind" and struggling to get compute for its research. The mission of the Superalignment team involved using 20% of OpenAI's computing power over the next four years to "build a roughly human-level automated alignment researcher," according to OpenAI's announcement of the team last July.

Leike added that "OpenAI must become a safety-first AGI company." He said building generative AI is "an inherently dangerous endeavor" and that OpenAI was more concerned with releasing "shiny products" than with safety.

Leike did not respond to a request for comment.

The Superalignment team's objective was to "solve the core technical challenges of superintelligence alignment in four years," a goal the company admitted was "incredibly ambitious." OpenAI also acknowledged it wasn't guaranteed to succeed.


Some of the risks the team worked on included "misuse, economic disruption, disinformation, bias and discrimination, addiction, and overreliance." The company said in its announcement that the team's work would be in addition to existing efforts at OpenAI aimed at improving the safety of current models, like ChatGPT.

Some of the team's remaining members have been rolled into other OpenAI teams, Wired reported.

OpenAI didn't respond to a request for comment.

