
Sam Altman and generative AI can't be trusted, says leading expert

Lloyd Lee   

  • Gary Marcus, founder of Geometric Intelligence, testified before the Senate with Sam Altman in 2023.
  • Once hopeful about Altman, Marcus now says the OpenAI CEO can't be trusted.

Sam Altman, the CEO of OpenAI and a poster boy for artificial intelligence, has repeatedly called for AI that benefits humanity and for regulation to ensure the world gets there.

But a leading AI expert says the CEO's actions contradict his public pronouncements, and that the current path to AI is headed in the wrong direction.

In a Saturday column for The Guardian, Gary Marcus, founder of machine learning company Geometric Intelligence and former head of Uber's AI lab, argued that Altman has repeatedly misled the public about his financial stake in OpenAI and questioned how genuine the CEO is when he calls for regulations.

Marcus wrote in the column that he was once hopeful about Altman, who "seemed genuine" and appeared to share his concerns about AI regulation when the two testified before lawmakers about the technology in May 2023.

"We both came out strongly for AI regulation," Marcus wrote. "Little by little, though, I realized that I, the Senate, and ultimately the American people, had probably been played."

The AI expert challenged Altman's claim that he had no equity in OpenAI, citing the CEO's indirect financial ties to the company through his stake in Y Combinator, a startup incubator Altman once led, and Rain AI, a chips startup that made a $51 million deal with the AI company.

A spokesperson for OpenAI did not immediately return a request for comment.

Marcus also questioned Altman's commitment to safety, writing that although the OpenAI CEO says he wants regulation, "the reality is far more complicated."

He cited a 2023 Time magazine article that reported how OpenAI sought to weaken the EU's AI Act in part by removing language that labeled OpenAI's ChatGPT as "high risk," which would have subjected the company to more restrictive laws.

Marcus also mentioned issues with transparency at OpenAI, where employees were asked to sign restrictive NDAs and former colleagues have accused Altman of lying to the board.

"Presumably Altman doesn't want to live in regret and infamy. But behind closed doors, his lobbyists keep pushing for weaker regulation, or none at all," Marcus wrote.

Beyond Altman, Marcus argued that the AI industry as a whole is headed down the wrong path as companies race to catch up to OpenAI.

"Unfortunately, many other AI companies seem to be on the path of hype and corner-cutting that Altman charted," Marcus wrote.

The AI expert also argued that generative AI tools like ChatGPT are "unlikely ever to be safe" and won't provide useful solutions in areas like medicine or climate change.

Generative AI tools "are recalcitrant, and opaque by nature — so-called 'black boxes' that we can never fully rein in," Marcus wrote.

"That said, I don't think we should abandon AI. Making better AI — for medicine, and material science, and climate science, and so on — really could transform the world," he wrote. "Generative AI is unlikely to do the trick, but some future, yet-to-be developed form of AI might."
