
Ex-OpenAI employee speaks out about why he was fired: 'I ruffled some feathers'

Ana Altchek   

  • Leopold Aschenbrenner spoke about his firing from OpenAI's superalignment team in a podcast.
  • He said HR warned him after he shared a memo about OpenAI's security with two board members.

A former OpenAI researcher said he "ruffled some feathers" by writing and sharing documents related to safety at the company before he was eventually fired.

Leopold Aschenbrenner, whose LinkedIn says he graduated from Columbia University at 19, worked on OpenAI's superalignment team before he was fired in April over an alleged leak, The Information reported at the time. He spoke out about the experience in a recent interview with the podcaster Dwarkesh Patel, which was released Tuesday.

Aschenbrenner said he wrote a memo after a "major security incident," which he didn't specify in the interview, and shared it with a couple of OpenAI board members. In the memo, he wrote that the company's security was "egregiously insufficient" in protecting against the theft of "key algorithmic secrets from foreign actors," Aschenbrenner said. The researcher had previously shared the memo with others at OpenAI, "who mostly said it was helpful," he added.

Human resources later gave him a warning about the memo, Aschenbrenner said, telling him it was "racist" and "unconstructive" to worry about Chinese Communist Party espionage. An OpenAI lawyer later asked him about his views on AI and AGI and whether Aschenbrenner and the superalignment team were "loyal to the company," as Aschenbrenner put it.

Aschenbrenner said the company then went through his OpenAI digital artifacts.

He was fired shortly after, he said, with the company alleging he had leaked confidential information and hadn't been forthcoming in its investigation. The company also cited the warning HR had given him after he shared the memo with the board members.

Aschenbrenner said the leak in question referred to a "brainstorming document on preparedness, on safety and security measures" needed for artificial general intelligence that he shared with three external researchers for feedback. He said that he'd reviewed the document before sharing it for any sensitive information and that it was "totally normal" at the company to share this kind of information for feedback.

Aschenbrenner said OpenAI deemed confidential a line about "planning for AGI by 2027 to 2028 and not setting timelines for preparedness." He said he wrote the document a couple of months after the announcement of the superalignment team, which referenced a four-year planning horizon.

In its announcement of the superalignment team posted in July last year, OpenAI said its goal was to "solve the core technical challenges of superintelligence alignment in four years."

"I didn't think that planning horizon was sensitive," Aschenbrenner said in the interview. "You know it's the sort of thing Sam says publicly all the time," he said, referring to CEO Sam Altman.

An OpenAI spokesperson told Business Insider the concerns Aschenbrenner raised internally and to its Board of Directors "did not lead to his separation."

"While we share his commitment to building safe AGI, we disagree with many of the claims he has since made about our work," the OpenAI spokesperson said.

Aschenbrenner is one of several former employees who have recently spoken out about safety concerns at OpenAI. Most recently, a group of nine current and former OpenAI employees signed a letter calling for more transparency in AI companies and protection for those who express concern about the technology.
