Another safety researcher is leaving OpenAI

Oct 25, 2024, 04:08 IST
Business Insider
  • Miles Brundage, who advises OpenAI leadership on safety and policy, announced his departure.
  • He said that he's leaving the company to have more independence and freedom to publish.

Miles Brundage, a senior policy advisor and head of the AGI Readiness team at OpenAI, is leaving the company. He announced the decision today in a post on X, accompanied by a Substack article explaining his reasoning. The AGI Readiness team he oversaw will be disbanded, with its members redistributed across other parts of the company.

Brundage is just the latest high-profile safety researcher to leave OpenAI. In May, the company dissolved its Superalignment team, which focused on the risks of artificial superintelligence, after the departure of its two leaders, Jan Leike and Ilya Sutskever. Recent months have also seen the departures of Chief Technology Officer Mira Murati, Chief Research Officer Bob McGrew, and VP of Research Barret Zoph.

OpenAI did not respond to a request for comment.

For the past six years, Brundage has advised OpenAI executives and board members on how best to prepare for the rise of artificial intelligence that rivals human intelligence, a development many experts agree could fundamentally transform society.

He has been responsible for some of OpenAI's biggest innovations in safety research, including instituting external red teaming, in which outside experts probe OpenAI products for potential problems.


Brundage said he's leaving the company to have more independence and freedom to publish. He cited disagreements with OpenAI over limits on what research he was allowed to publish, saying that "the constraints have become too much."

He also said that working within OpenAI has biased his research and made it difficult to be impartial about the future of AI policy. In his post on X, Brundage referenced a prevailing sentiment within OpenAI that "speaking up has big costs and that only some people are able to do so."
