OpenAI starts training its 'next frontier model' and takes action on safety concerns

Jul 16, 2024, 18:44 IST
Business Insider
  • OpenAI has set up a new safety committee to advise the board on critical decisions.
  • Sam Altman's company also said it had begun training a new flagship AI model.
OpenAI has set up a safety and security committee to make recommendations to the board on "critical safety and security decisions," it said.

The committee will be chaired by Bret Taylor and include his fellow directors Adam D'Angelo and Nicole Seligman, as well as CEO Sam Altman.

The company also said it had begun training a new flagship AI model to succeed GPT-4.

It said Tuesday in a blog post: "OpenAI has recently begun training its next frontier model and we anticipate the resulting systems to bring us to the next level of capabilities on our path to AGI."

OpenAI recently launched its updated GPT-4o model, which uses native audio inputs and outputs. When integrated into ChatGPT, the model allows users to have humanlike conversations with the bot, speaking with it and showing it things.

OpenAI declined to elaborate on its blog post.

The new committee comes amid increased scrutiny over OpenAI's commitment to artificial-intelligence safety. Industry experts and former employees have been asking difficult questions about how it's handling the risks posed by advanced AI.

Jan Leike and Ilya Sutskever, who led a team within OpenAI focused on ensuring its systems aligned with human interests, resigned earlier this month.

After his departure, Leike criticized the company's commitment to AI safety in a lengthy X post. He accused OpenAI of putting "shiny products" ahead of safety and claimed the safety team was left "struggling for compute."

OpenAI said in the blog post that the new committee's first task was to "evaluate and further develop OpenAI's processes and safeguards over the next 90 days."

It said the committee would then present its recommendations to the full board, after which OpenAI would announce which of them it had adopted.

OpenAI has been battling other recent controversies, including the use of tight nondisclosure agreements to silence departing employees and a public spat with Scarlett Johansson.