
Former OpenAI board members say the company can't be trusted to govern itself

Lloyd Lee, Hannah Getahun

  • Two former OpenAI board members have said governments must regulate artificial-intelligence companies.
  • Helen Toner and Tasha McCauley were the only women on the company's board and left in November.

Two former OpenAI board members have said that artificial-intelligence companies can't be trusted to govern themselves and that third-party regulation is necessary to hold them accountable.

Helen Toner and Tasha McCauley were board members at OpenAI before they stepped down in November amid a chaotic push to oust CEO Sam Altman. Altman was swiftly reinstated as CEO days after his dismissal, and he returned to the board five months later.

In an op-ed for The Economist published Sunday, Toner and McCauley wrote that they stood by their decision to remove Altman, citing statements from senior leaders that the cofounder created a "toxic culture of lying" and engaged in behavior that could be "characterized as psychological abuse."

Since Altman returned to the board in March, OpenAI has been questioned about its commitment to safety and criticized for using an AI voice for GPT-4o that sounded eerily like the actor Scarlett Johansson.

Toner and McCauley wrote that with Altman back at the helm, OpenAI couldn't be trusted to hold itself accountable.

"We also feel that developments since he returned to the company —including his reinstatement to the board and the departure of senior safety-focused talent — bode ill for the OpenAI experiment in self-governance," they wrote.

Toner and McCauley argued that for OpenAI to succeed in its stated mission to benefit "all of humanity," governments needed to intervene and establish "effective regulatory frameworks now."

The former board members wrote that they once believed OpenAI could govern itself, but "based on our experience, we believe that self-governance cannot reliably withstand the pressure of profit incentives."

OpenAI, Toner, and McCauley didn't immediately respond to a request for comment from Business Insider.

Policymakers must 'act independently' of AI companies

Toner and McCauley qualified their calls for government regulation by acknowledging that poorly designed laws could hinder "competition and innovation" by burdening smaller companies.

"It is crucial that policymakers act independently of leading AI companies when developing new rules," they wrote. "They must be vigilant against loopholes, regulatory 'moats' that shield early movers from competition, and the potential for regulatory capture."

In April, the Department of Homeland Security announced the establishment of the Artificial Intelligence Safety and Security Board, which it said would provide recommendations for the "safe and secure development and deployment of AI" across US critical infrastructure.

The board's 22 members include Altman and the chief executives of large tech companies, among them Nvidia CEO Jensen Huang and Alphabet CEO Sundar Pichai.

Although the safety board also includes representatives from tech nonprofits, leaders of for-profit companies are overrepresented.

AI ethicists who spoke to Ars Technica expressed concern that the outsize influence of profit-motivated companies could result in policies that favored industry over human safety.

"If we can all agree that we care about keeping people 'safe' with respect to how AI is used, then I think we can agree it's important to have people at the table who specialize in centering people over technology," Margaret Mitchell, an AI-ethics expert at Hugging Face, told Ars Technica.

A DHS spokesperson didn't respond to a request for comment.


