
OpenAI employees are demanding change. Here are the 4 things they want.

Jun 5, 2024, 01:25 IST
Business Insider
Sam Altman and other AI leaders are under fire from former and current employees. Jack Guez/Getty Images; Jenny Chang-Rodriguez/BI
  • Current and former employees at top AI companies are speaking out about the risks of AI.
  • At least nine OpenAI insiders signed a letter calling for more protection for whistleblowers.

A group of nine current and former OpenAI employees has signed a letter calling out tech firms over major concerns about the risks of artificial intelligence.

In their letter, the tech workers called for more transparency in AI companies and better protections for whistleblowers who raise concerns about AI's power.

"We are current and former employees at frontier AI companies, and we believe in the potential of AI technology to deliver unprecedented benefits to humanity," the letter said.

"We also understand the serious risks posed by these technologies. These risks range from the further entrenchment of existing inequalities, to manipulation and misinformation, to the loss of control of autonomous AI systems potentially resulting in human extinction," it continued. "AI companies themselves have acknowledged these risks, as have governments across the world, and other AI experts."

A total of 13 people signed the letter, and they come from some of the top companies in AI, including OpenAI, Anthropic, and DeepMind. It was also endorsed by two men known as the "Godfathers of AI," Yoshua Bengio and Geoffrey Hinton.


"I decided to leave OpenAI because I lost hope that they would act responsibly, particularly as they pursue artificial general intelligence," Daniel Kokotajlo, a former OpenAI employee who signed the letter, said in a statement.

"They and others have bought into the 'move fast and break things' approach and that is the opposite of what is needed for technology this powerful and this poorly understood," he added.

The AI employees outlined four demands they said would help mitigate existing issues of inequality and misinformation in AI.

Here are the four principles the 13 employees said they want OpenAI and other AI companies to adopt:

  1. That the company will not enter into or enforce any agreement that prohibits "disparagement" or criticism of the company for risk-related concerns, nor retaliate for risk-related criticism by hindering any vested economic benefit;
  2. That the company will facilitate a verifiably anonymous process for current and former employees to raise risk-related concerns to the company's board, to regulators, and to an appropriate independent organization with relevant expertise;
  3. That the company will support a culture of open criticism and allow its current and former employees to raise risk-related concerns about its technologies to the public, to the company's board, to regulators, or to an appropriate independent organization with relevant expertise, so long as trade secrets and other intellectual property interests are appropriately protected;
  4. That the company will not retaliate against current and former employees who publicly share risk-related confidential information after other processes have failed. We accept that any effort to report risk-related concerns should avoid releasing confidential information unnecessarily. Therefore, once an adequate process for anonymously raising concerns to the company's board, to regulators, and to an appropriate independent organization with relevant expertise exists, we accept that concerns should be raised through such a process initially. However, as long as such a process does not exist, current and former employees should retain their freedom to report their concerns to the public.

An OpenAI spokesperson told Business Insider that the company was "proud of our track record providing the most capable and safest A.I. systems and believe in our scientific approach to addressing risk."


"We agree that rigorous debate is crucial given the significance of this technology, and we'll continue to engage with governments, civil society and other communities around the world," the statement continued.

Several OpenAI employees have departed over the past several weeks, including Ilya Sutskever, an OpenAI cofounder and former board member who voted to remove Sam Altman as CEO before expressing regret, and Gretchen Krueger, a policy researcher who shared concerns about transparency and accountability at the ChatGPT maker.

After The Economist last week published an op-ed article by the former OpenAI board members Helen Toner and Tasha McCauley criticizing Altman and his company's safety practices, current board members came to his defense.

In their own op-ed article, Bret Taylor and Larry Summers pushed back on the claims and said the board was "taking commensurate steps to ensure safety and security."

Representatives for Google DeepMind and Anthropic did not immediately respond to a request for comment from Business Insider ahead of publication.
