OpenAI's offices were sent thousands of paper clips in an elaborate prank to warn about an AI apocalypse

Nov 23, 2023, 20:34 IST
Business Insider
Microsoft's much-maligned Clippy was one of the first "intelligent office assistants" – but never tried to wipe out humanity. SOPA Images/Getty Images
  • An employee at rival Anthropic sent OpenAI thousands of paper clips shaped like OpenAI's logo.
  • The prank was a subtle jibe suggesting OpenAI's approach to AI could lead to humanity's extinction.

One of OpenAI's biggest rivals played an elaborate prank on the AI startup by sending thousands of paper clips to its offices.

The paper clips in the shape of OpenAI's distinctive spiral logo were sent to the AI startup's San Francisco offices last year by an employee at rival Anthropic, in a subtle jibe suggesting that the company's approach to AI safety could lead to the extinction of humanity, according to a report from The Wall Street Journal.

They were a reference to the famous "paper clip maximizer" scenario, a thought experiment from philosopher Nick Bostrom, which hypothesized that an AI given the sole task of making as many paper clips as possible might unintentionally wipe out the human race in order to achieve its goal.

"We need to be careful about what we wish for from a superintelligence, because we might get it," Bostrom wrote.

Anthropic was founded by former OpenAI employees who left the company in 2021 over disagreements about how to develop AI safely.

Since then, OpenAI has rapidly accelerated its commercial offerings, launching ChatGPT last year to record-breaking success and striking a multibillion-dollar investment deal with Microsoft in January.

AI safety concerns have come back to haunt the company in recent weeks, however, with the chaotic firing and subsequent reinstatement of CEO Sam Altman.

Reports have suggested that concerns over the speed of AI development within the company, and fears that this could hasten the arrival of superintelligent AI that could threaten humanity, were reasons why OpenAI's non-profit board chose to fire Altman in the first place.

OpenAI's chief scientist Ilya Sutskever, who took part in the board coup against Altman before dramatically joining calls for him to be reinstated, has been outspoken about the existential risks artificial general intelligence could pose to humanity, and reportedly clashed with Altman on the issue.

According to The Atlantic, Sutskever commissioned and set fire to a wooden effigy representing "unaligned" AI at a recent company retreat, and he reportedly also led OpenAI's employees in a chant of "feel the AGI" at the company's holiday party, after saying: "Our goal is to make a mankind-loving AGI."

OpenAI and Anthropic did not immediately respond to a request for comment from Business Insider, made outside normal working hours.
