
Silicon Valley's battle over AI risks is escalating fast

Beatrice Nolan   

  • Silicon Valley is at war over AI safety.
  • A group of workers associated with leading AI companies has issued a list of demands.

There's a battle in Silicon Valley over AI risks and safety — and it's escalating fast.

While tensions have been simmering in recent weeks, they took center stage on Tuesday after a group of workers associated with leading AI companies issued a list of demands in an open letter.

The current and former employees of OpenAI, Anthropic, and Google DeepMind called for more protection for whistleblowers, anonymous reporting channels, and, most importantly, the elimination of non-disparagement agreements.

The signatories, who included several current and former OpenAI employees, cited the need to be able to raise risk-related concerns about AI both internally and with the public.

This has been a point of contention for OpenAI. Several staffers have recently left, criticizing executives' commitment to safeguarding AI on their way out.

The company's policy of strict NDAs for departing staff has also come under scrutiny after reports emerged that OpenAI had been leveraging ex-staffers' vested equity to guarantee their silence.

The company recently released former employees from those agreements, but some are now fighting back.

Right to Warn

While the concerns around AI safety are nothing new, they're increasingly being amplified by those within AI companies.

Independent experts have long warned of AI's potential risk to humanity. The employees' "Right to Warn" letter is endorsed by some of these notable industry names, including "AI godfathers" Yoshua Bengio and Geoffrey Hinton and leading computer scientist Stuart Russell.

While some governments are making attempts to regulate AI, safety initiatives are still largely "opt-in" for companies, meaning they won't face any consequences if promises go unfulfilled.

Jacob Hilton, a former OpenAI employee who signed the letter, told Business Insider it was important that employees were able to speak freely about AI safety concerns.

"There's a lot of commercial pressure on these companies to cut corners," Hilton said. "But with a voluntary commitment, there aren't any consequences for breaking them."

"If employees are kept quiet, then the public may never know that the company had broken commitments until something bad happens."

While Hilton said OpenAI had assured former employees it didn't intend to enforce the non-disparagement agreements, he said the company still needed to clarify whether employees could sell their vested equity.

He added the loophole was still a "potential avenue for retaliation."

OpenAI under pressure

Tensions have been rising at OpenAI for some time, and the public letter will likely increase pressure on CEO Sam Altman.

Since the November boardroom coup that briefly ousted Altman, speculation has swirled about what really went wrong inside the capped-profit company.

Last month, ex-board member Helen Toner said the decision to oust Altman was partly taken after he made it "basically impossible" for the board to understand whether the safety measures in place for AI development were sufficient.

Russell, a leading AI expert who publicly endorsed the letter, told BI that OpenAI's high-profile resignations resulted from the company's drive to build artificial general intelligence capabilities without figuring out how to make them safe.

He accused companies like OpenAI of "undermining every attempt made to regulate" them in favor of protecting their commercial interests.

Russell added that tech companies were "ramping up their spending on lobbying and have a very long-standing tradition of convincing legislators."

Former employees have also raised questions about OpenAI's commitment to AI safety.

Daniel Kokotajlo, another former OpenAI staffer who signed the letter, said tech companies were racing to develop powerful AI while disregarding the tech's risks.

He said his decision to leave came after he "lost hope that they would act responsibly, particularly as they pursue artificial general intelligence."

"They and others have bought into the 'move fast and break things' approach and that is the opposite of what is needed for technology this powerful and this poorly understood," Kokotajlo said.

OpenAI did not immediately respond to a request for comment from Business Insider, made outside normal working hours.

A spokesperson previously reiterated the company's commitment to safety, highlighting an "anonymous integrity hotline" for employees to voice their concerns and the company's safety and security committee.

"We're proud of our track record providing the most capable and safest AI systems and believe in our scientific approach to addressing risk," they said in an email.

"We agree that rigorous debate is crucial given the significance of this technology and we'll continue to engage with governments, civil society and other communities around the world."

