
ChatGPT probably won't help create biological weapons, OpenAI says

Feb 1, 2024, 18:59 IST
Business Insider
OpenAI formed its "preparedness" team last year to look into AI's potentially "catastrophic" risks. FLORENCE LO/Reuters
  • An OpenAI report found there's only a slight chance GPT-4 could be used to help create biological threats.
  • The report is from OpenAI's "preparedness" team, which studies potential "catastrophic" risks of AI.

OpenAI thinks there's only a slight chance that ChatGPT could be used to help create biological threats.

The AI startup said in a new report that its GPT-4 model provided "at most a mild uplift" in the ability to create biological weapons, but warned that future models could be more helpful for "malicious actors" looking to use chatbots to help make bioweapons.

Experts have warned that AI could be used to facilitate biological terror attacks, either by helping terrorists create biological and chemical weapons or by helping them plan their attacks.

A major report from the Rand Corporation last year found that large language models (LLMs) could be used to help plan biological attacks, though it said that they could not provide specific instructions for actually creating bioweapons.

Others within the tech industry have also expressed alarm. In Senate committee hearings last year, Anthropic CEO Dario Amodei said that AI models could soon be able to provide instructions for advanced bioweapons, and Mark Zuckerberg was confronted by claims that Meta's Llama 2 model provided a detailed walkthrough of how to create anthrax.


The report from OpenAI's "preparedness" team, which was formed last year to study potential "catastrophic" effects that could arise from the development of advanced AI, aimed to investigate those concerns.

Researchers assembled a group of 50 biology experts and 50 students who had taken at least one college course in biology, and randomly assigned them to a group with access to GPT-4 or a control group with access to the internet.

Both groups were tasked with answering a series of questions related to bioweapon creation, including how they would synthesize the highly infectious Ebola virus. The GPT-4 group was given access to a research-only version of the model that, unlike ChatGPT, has fewer security "guardrails" in place, OpenAI said.

The study found that while those with access to GPT-4 saw a small increase in accuracy and detail over the group using just the internet, the difference was not statistically significant and did not indicate a real increase in risk.
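To illustrate the kind of comparison behind a finding like this (this is not OpenAI's actual analysis, and the scores below are invented for the example), a two-sample permutation test asks how often a difference in mean accuracy between two groups could arise by chance alone:

```python
import random

random.seed(0)

# Hypothetical accuracy scores (0-10 scale) per participant; not real study data.
gpt4_group = [6.1, 5.8, 7.0, 6.4, 5.9, 6.6, 6.2, 5.7, 6.8, 6.0]
internet_group = [5.9, 5.5, 6.3, 6.1, 5.6, 6.4, 5.8, 5.4, 6.2, 5.7]

observed = (sum(gpt4_group) / len(gpt4_group)
            - sum(internet_group) / len(internet_group))

# Permutation test: repeatedly shuffle the pooled scores and count how often
# a random split yields a mean difference at least as large as the observed one.
pooled = gpt4_group + internet_group
n = len(gpt4_group)
trials = 10_000
count = 0
for _ in range(trials):
    random.shuffle(pooled)
    diff = sum(pooled[:n]) / n - sum(pooled[n:]) / n
    if diff >= observed:
        count += 1

p_value = count / trials
print(f"observed uplift: {observed:.2f}, p-value: {p_value:.3f}")
```

A large p-value (conventionally above 0.05) means the observed "uplift" is consistent with chance, which is the sense in which a small boost can fail to be statistically significant.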

"While this uplift is not large enough to be conclusive, our finding is a starting point for continued research and community deliberation," OpenAI said. It also cautioned that, given the current pace of innovation in AI, future versions of ChatGPT could "provide sizable benefits to malicious actors."


OpenAI did not immediately respond to a request for comment from Business Insider, made outside normal working hours.
