
A former OpenAI safety employee said he quit because the company's leaders were 'building the Titanic' and wanted 'newer, shinier' things to sell

Jul 10, 2024, 15:35 IST
Business Insider
Sam Altman, CEO of OpenAI, arrives at the Allen & Company Sun Valley Conference on July 9, 2024 in Sun Valley, Idaho. Kevork Djansezian/Getty Images
  • An ex-OpenAI employee said the firm is going down the path of the Titanic with its safety decisions.
  • William Saunders warned of the hubris around the safety of the Titanic, which had been deemed "unsinkable."

A former safety employee at OpenAI said the company is following in the footsteps of White Star Line, the company that built the Titanic.

"I really didn't want to end up working for the Titanic of AI, and so that's why I resigned," said William Saunders, who worked for three years as a member of technical staff on OpenAI's superalignment team.

He was speaking on an episode of tech YouTuber Alex Kantrowitz's podcast, released on July 3.

"During my three years at OpenAI, I would sometimes ask myself a question. Was the path that OpenAI was on more like the Apollo program or more like the Titanic?" he said.

The software engineer's concerns stem largely from OpenAI's plan to achieve artificial general intelligence (AGI) — the point where AI can teach itself — while also debuting paid products.


"They're on this trajectory to change the world, and yet when they release things, their priorities are more like a product company. And I think that is what is most unsettling," Saunders said.

Apollo vs Titanic

As Saunders spent more time at OpenAI, he felt leaders were making decisions more akin to "building the Titanic, prioritizing getting out newer, shinier products."

He would have much preferred a mood like the Apollo space program's, which he characterized as an example of an ambitious project that "was about carefully predicting and assessing risks" while pushing scientific limits.

"Even when big problems happened, like Apollo 13, they had enough sort of like redundancy, and were able to adapt to the situation in order to bring everyone back safely," he said.

The Titanic, on the other hand, was built by White Star Line as it competed with its rivals to build ever-bigger ocean liners, Saunders said.


Saunders fears that, like with the Titanic's safeguards, OpenAI could be relying too heavily on its current measures and research for AI safety.

"Lots of work went into making the ship safe and building watertight compartments so that they could say that it was unsinkable," he said. "But at the same time, there weren't enough lifeboats for everyone. So when disaster struck, a lot of people died."

To be sure, the Apollo missions were conducted against the backdrop of a Cold War space race with the Soviet Union. They also involved serious casualties, including three NASA astronauts who died in 1967 in an electrical fire during a test.

Explaining his analogy further in an email to Business Insider, Saunders wrote: "Yes, the Apollo program had its own tragedies. It is not possible to develop AGI or any new technology with zero risk. What I would like to see is the company taking all possible reasonable steps to prevent these risks."

OpenAI needs more 'lifeboats,' Saunders says

Saunders told BI that a "Titanic disaster" for AI could manifest in a model that can launch a large-scale cyberattack, persuade people en masse in a campaign, or help build biological weapons.


In the near term, OpenAI should invest in additional "lifeboats," like delaying the release of new language models so teams can research potential harms, he said in his email.

While on the superalignment team, Saunders led a group of four staff dedicated to understanding how AI language models behave — something he said humans don't know enough about.

"If in the future we build AI systems as smart or smarter than most humans, we will need techniques to be able to tell if these systems are hiding capabilities or motivations," he wrote in his email.

Ilya Sutskever, cofounder of OpenAI, left the firm in June after leading its superalignment division. JACK GUEZ/AFP via Getty Images

In his interview with Kantrowitz, Saunders added that company staff often discussed theories that AI could become a "wildly transformative" force within just a few years.

"I think when the company is talking about this, they have a duty to put in the work to prepare for that," he said.


But he's been disappointed with OpenAI's actions so far.

In his email to BI, he said: "While there are employees at OpenAI doing good work on understanding and preventing risks, I did not see a sufficient prioritization of this work."

Saunders left OpenAI in February. The company then dissolved its superalignment team in May, just days after announcing GPT-4o, its most advanced AI product available to the public.

OpenAI did not immediately respond to a request for comment sent outside regular business hours by Business Insider.

Tech companies like OpenAI, Apple, Google, and Meta have been engaged in an AI arms race, sparking an investment frenzy in what is widely predicted to be the next great industry disruptor, akin to the internet.


The breakneck pace of development has prompted some employees and experts to warn that more corporate governance is needed to avoid future catastrophes.

In early June, a group of former and current employees at Google DeepMind and OpenAI — including Saunders — published an open letter warning that current industry oversight standards were insufficient to safeguard against disaster for humanity.

Meanwhile, OpenAI cofounder and former chief scientist Ilya Sutskever, who led the firm's superalignment division, resigned later that month.

He founded another startup, Safe Superintelligence Inc., that he said would focus on researching AI while ensuring "safety always remains ahead."
