- Hackers infiltrated OpenAI in early 2023, but the company chose not to disclose the security breach.
- News of the hack has raised concerns that OpenAI is vulnerable to foreign agents, especially China.
The hits just keep on coming for booming tech giant OpenAI. The latest addition to its list of problems? Hackers.
OpenAI, despite its name, has developed a reputation for secrecy. But it was still surprising to learn that hackers gained entry to the company's internal messaging system all the way back in early 2023, stealing information related to its AI designs, and that the company never mentioned it to anyone.
Two people with knowledge of the incident told The New York Times that OpenAI executives decided not to publicly disclose the hacking because no customer or partner information was compromised. OpenAI also did not report the hack to the police or the FBI.
OpenAI told Business Insider that the company had "identified and fixed" the "underlying security issue" that led to the breach. The company said the hacker was a private individual without government affiliation and that no source code repositories were impacted.
Still, the hacking prompted concern inside and outside the company that OpenAI's security is too weak, leaving it open to foreign adversaries like China.
While the United States leads the global AI arms race, China is not far behind. US officials consider China's use of AI a major potential security threat. So the idea that OpenAI's data and systems are penetrable is worrisome.
Employees inside the company have also expressed concern about its attention to security. Leopold Aschenbrenner, a former OpenAI researcher, said the company fired him in April after he sent a memo detailing a "major security incident." He described the company's security as "egregiously insufficient" to protect against theft by foreign actors.
OpenAI has denied that it fired Aschenbrenner for raising security concerns.
Aschenbrenner was a member of the company's "superalignment" team, which worked to ensure the safe development of OpenAI's technology. A month after OpenAI fired Aschenbrenner, two more of the team's top members quit, and the team effectively dissolved.
One of them was OpenAI cofounder and chief scientist Ilya Sutskever. He announced his departure just six months after he helped spearhead OpenAI CEO Sam Altman's failed ouster, which stemmed partly from disagreements between the two men over the safety of the technology. Hours after Sutskever announced his departure, his colleague Jan Leike also left.
After the drama settled, OpenAI last month created a new safety and security committee and added former NSA director Paul Nakasone to lead the group. Nakasone, now the newest OpenAI board member, previously headed US Cyber Command, the Defense Department's command for cyberspace operations.
While Nakasone's presence signals that OpenAI is taking security more seriously, his addition was not without controversy. Edward Snowden, the US whistleblower who leaked classified documents detailing government surveillance in 2013, said in a post on X that Nakasone's hiring was a "calculated betrayal of the rights of every person on Earth."