Ex-OpenAI employee speaks out about why he was fired: 'I ruffled some feathers'

Jun 6, 2024, 18:34 IST
Business Insider
Former OpenAI employee Leopold Aschenbrenner spoke out about his firing. Jaap Arriens/Getty
  • Leopold Aschenbrenner spoke about his firing from OpenAI's superalignment team in a podcast.
  • He said HR warned him after he shared a memo about OpenAI's security with two board members.

A former OpenAI researcher opened up about how he "ruffled some feathers" by writing and sharing documents related to safety at the company, which he said eventually led to his firing.

Leopold Aschenbrenner, whose LinkedIn says he graduated from Columbia University at 19, worked on OpenAI's superalignment team before he was fired in April over an alleged leak, The Information reported at the time. He spoke out about the experience in a recent interview with the podcaster Dwarkesh Patel, which was released Tuesday.

Aschenbrenner said he wrote a memo after a "major security incident," which he didn't specify in the interview, and shared it with a couple of OpenAI board members. In the memo, he wrote that the company's security was "egregiously insufficient" in protecting against the theft of "key algorithmic secrets from foreign actors," Aschenbrenner said. The researcher had previously shared the memo with others at OpenAI, "who mostly said it was helpful," he added.

Human resources later gave him a warning about the memo, Aschenbrenner said, telling him it was "racist" and "unconstructive" to worry about Chinese Communist Party espionage. An OpenAI lawyer later asked him about his views on AI and AGI and whether Aschenbrenner and the superalignment team were "loyal to the company," as Aschenbrenner put it.

Aschenbrenner said the company then went through his OpenAI digital artifacts.

He was fired shortly after, he said, with the company alleging he had leaked confidential information and wasn't forthcoming in its investigation. The company also referenced his prior warning from HR after he shared the memo with the board members.

Aschenbrenner said the leak in question referred to a "brainstorming document on preparedness, on safety and security measures" needed for artificial general intelligence that he shared with three external researchers for feedback. He said that he'd reviewed the document before sharing it for any sensitive information and that it was "totally normal" at the company to share this kind of information for feedback.

Aschenbrenner said OpenAI deemed confidential a line about "planning for AGI by 2027 to 2028 and not setting timelines for preparedness." He said he wrote the document a couple of months after the announcement of the superalignment team, which had referenced a four-year planning horizon.

In its announcement of the superalignment team posted in July last year, OpenAI said its goal was to "solve the core technical challenges of superintelligence alignment in four years."

"I didn't think that planning horizon was sensitive," Aschenbrenner said in the interview. "You know, it's the sort of thing Sam says publicly all the time," he added, referring to CEO Sam Altman.

An OpenAI spokesperson told Business Insider the concerns Aschenbrenner raised internally and to its Board of Directors "did not lead to his separation."

"While we share his commitment to building safe AGI, we disagree with many of the claims he has since made about our work," the OpenAI spokesperson said.

Aschenbrenner is one of several former employees who have recently spoken out about safety concerns at OpenAI. Most recently, a group of nine current and former OpenAI employees signed a letter calling for more transparency at AI companies and protections for those who raise concerns about the technology.
