
Former OpenAI researchers warn of 'catastrophic harm' after the company opposes AI safety bill

Aug 24, 2024, 11:18 IST
Business Insider
Ex-OpenAI researchers warned of "catastrophic harm" in a letter to California Gov. Gavin Newsom after the company, led by CEO Sam Altman, opposed SB 1047, a proposed AI safety bill. Andrew Caballero-Reynolds/AFP/Getty Images
  • OpenAI opposes a proposed AI bill imposing strict safety protocols, including a "kill switch."
  • Two former OpenAI researchers said the company's opposition is disappointing but not surprising.

Two former OpenAI researchers are speaking out against the company's opposition to SB 1047, a proposed California bill that would implement strict safety protocols in the development of AI, including a "kill switch."

The former employees wrote in a letter first shared with Politico to California Gov. Gavin Newsom and other lawmakers that OpenAI's opposition to the bill is disappointing but not surprising.

"We joined OpenAI because we wanted to ensure the safety of the incredibly powerful AI systems the company is developing," the researchers, William Saunders and Daniel Kokotajlo, wrote in the letter. "But we resigned from OpenAI because we lost trust that it would safely, honestly, and responsibly develop its AI systems."

Saunders and Kokotajlo did not immediately respond to requests for comment from Business Insider.

They continued: "Developing frontier AI models without adequate safety precautions poses foreseeable risks of catastrophic harm to the public."


OpenAI CEO Sam Altman has repeatedly and publicly supported the concept of AI regulation, Saunders and Kokotajlo wrote, citing Altman's congressional testimony calling for government intervention, but "when actual regulation is on the table, he opposes it."

A spokesperson for OpenAI told BI in a statement: "We strongly disagree with the mischaracterization of our position on SB 1047." The spokesperson directed BI to a separate letter written by OpenAI's Chief Strategy Officer, Jason Kwon, to California Senator Scott Wiener, who introduced the bill, explaining the company's opposition.

SB 1047 "has inspired thoughtful debate," and OpenAI supports some of its safety provisions, Kwon's letter, dated a day before the researchers' letter was sent, read. However, due to the national security implications of AI development, the company believes regulation should be "shaped and implemented at the federal level."

"A federally-driven set of AI policies, rather than a patchwork of state laws, will foster innovation and position the US to lead the development of global standards," Kwon's letter read.

But Saunders and Kokotajlo aren't convinced the push for federal legislation is the sole reason OpenAI opposes California's SB 1047, saying the company's complaints about the bill "are not constructive and don't seem in good faith."


"We cannot wait for Congress to act — they've explicitly said that they aren't willing to pass meaningful AI regulation," Saunders and Kokotajlo wrote. "If they ever do, it can preempt CA legislation."

The former OpenAI employees concluded: "We hope that the California Legislature and Governor Newsom will do the right thing and pass SB 1047 into law. With appropriate regulation, we hope OpenAI may yet live up to its mission statement of building AGI safely."

Representatives for Wiener and Newsom did not immediately respond to requests for comment from BI.
