
OpenAI founders Sam Altman and Greg Brockman go on the defensive after top safety researchers quit

Lakshmi Varanasi   

  • OpenAI's Ilya Sutskever and Jan Leike, who led a team focused on AI safety, resigned.
  • The departures sparked concern about OpenAI's commitment to ensuring AI doesn't destroy the world.

Two of OpenAI's founders, CEO Sam Altman and President Greg Brockman, are on the defensive after a shake-up in the company's safety department this week.

The company's chief scientist, Ilya Sutskever, also a founder, announced on X on Tuesday that he was leaving. Hours later, his colleague Jan Leike followed suit.

Sutskever and Leike led OpenAI's superalignment team, which focused on developing AI systems compatible with human interests. That sometimes put them at odds with members of the company's leadership who advocated for more aggressive development.

"I have been disagreeing with OpenAI leadership about the company's core priorities for quite some time, until we finally reached a breaking point," Leike wrote on X on Friday.

Sutskever was among the six board members who tried to oust Altman as CEO in November, though he later said he regretted the move.

After their departures, Altman called Sutskever "one of the greatest minds of our generation" and said he was "super appreciative" of Leike's contributions in posts on X. He also said Leike was right: "We have a lot more to do; we are committed to doing it."

But as public concern continued to mount, Brockman offered more details on Saturday about how OpenAI will approach safety and risk moving forward — especially as it develops artificial general intelligence and builds AI systems that are more sophisticated than chatbots.

In a nearly 500-word post on X that both he and Altman signed, Brockman addressed the steps OpenAI has already taken to ensure the safe development and deployment of the technology.

"We've repeatedly demonstrated the incredible possibilities from scaling up deep learning and analyzed their implications; called for international governance of AGI before such calls were popular; and helped pioneer the science of assessing AI systems for catastrophic risks," Brockman wrote.

Altman recently said the best way to regulate AI would be through an international agency that ensures reasonable safety testing, but he also expressed wariness of regulation by government lawmakers who may not fully understand the technology.

Brockman said OpenAI has also established the foundations for safely deploying AI systems more capable than GPT-4.

"As we build in this direction, we're not sure yet when we'll reach our safety bar for releases, and it's ok if that pushes out release timelines," Brockman wrote.

Brockman and Altman added in their post that the best way to anticipate threats is through a "very tight feedback loop, rigorous testing, careful consideration at every step, world-class security, and harmony of safety and capabilities," as well as collaborating with "governments and many stakeholders on safety."

But not everyone is convinced that the OpenAI team is moving ahead with development in a way that ensures the safety of humans, least of all, it seems, the people who, up to a few days ago, led the company's effort in that regard.

"These problems are quite hard to get right, and I am concerned we aren't on a trajectory to get there," Leike said.


