
Ex-OpenAI exec calls out Sam Altman for choosing 'shiny products' over AI safety

Paul Squire   

  • A top OpenAI executive researching safety quit Tuesday night.
  • Jan Leike said he had reached a "breaking point."

A former top safety executive at OpenAI is laying it all out.

Jan Leike, a co-leader of the artificial intelligence company's superalignment team, announced he was quitting on Tuesday night with a blunt post on X: "I resigned."

Leike has now shared more about his exit — and said OpenAI isn't taking safety seriously enough.

"Over the past years, safety culture and processes have taken a backseat to shiny products," Leike wrote in a lengthy X thread on Friday.

In his posts, Leike said he joined OpenAI because he thought it would be the best place to research how to "steer and control" artificial general intelligence, a thus-far-hypothetical form of AI that would be able to think faster than a human.

"However, I have been disagreeing with OpenAI leadership about the company's core priorities for quite some time, until we finally reached a breaking point," Leike wrote.

The former OpenAI executive said the company should be keeping most of its attention on issues of "security, monitoring, preparedness, safety, adversarial robustness, (super)alignment, confidentiality, societal impact, and related topics."

But Leike said his team, which was working on how to align AI systems with what's best for humanity, was "sailing against the wind" at OpenAI.

"We are long overdue in getting incredibly serious about the implications of AGI," he wrote, adding that, "OpenAI must become a safety-first AGI company."

Leike capped off his thread with a note to OpenAI employees, encouraging them to shift the company's safety culture.

"I am counting on you. The world is counting on you," he said.

OpenAI CEO Sam Altman responded to Leike's thread on X.

"I'm super appreciative of @janleike's contributions to OpenAI's alignment research and safety culture, and very sad to see him leave," Altman said on X. "He's right we have a lot more to do; we are committed to doing it. I'll have a longer post in the next couple of days."

He ended the message with an orange-heart emoji.

Resignations at OpenAI

Leike and Ilya Sutskever, the other superalignment team leader, announced they were leaving OpenAI within hours of each other.

In a statement on X, Altman praised Sutskever as "easily one of the greatest minds of our generation, a guiding light of our field, and a dear friend."

"OpenAI would not be what it is without him," Altman wrote. "Although he has something personally meaningful he is going to go work on, I am forever grateful for what he did here and committed to finishing the mission we started together."

Wired reported on Friday that OpenAI had disbanded the pair's AI-risk team. It said that researchers who were investigating the dangers of AI would now be absorbed into other parts of the company.

The AI company — which recently debuted its new large language model, GPT-4o — has been rocked by high-profile shake-ups in the last few weeks.

In addition to Leike's and Sutskever's departures, The Information reported that Diane Yoon, the former vice president of people, and Chris Clark, the former head of nonprofit and strategic initiatives, had left OpenAI. And last week, BI reported that two other researchers working on safety had quit the company.

One of those researchers later said that he'd lost confidence that OpenAI would "behave responsibly around the time of AGI."

