- A new MIT and Harvard project found that AI chatbots provided students with information on how to cause a new pandemic.
- Bots like ChatGPT provided examples of deadly pathogens and advice on how to obtain them.
Forget AI taking over people's jobs — new research suggests chatbots could potentially contribute to bioterrorism, helping design viruses capable of causing the next pandemic, Axios reports.
Scientists from MIT and Harvard assigned students to investigate likely sources of a future pandemic using bots like ChatGPT, an artificial intelligence model that gives conversational answers to prompts on a wide variety of topics, drawing on the encyclopedic knowledge absorbed from its training data.
Students spent an hour asking the chatbots about topics like pandemic-capable pathogens, transmission, and access to pathogen samples. The bots readily provided examples of dangerous viruses that would be particularly efficient at causing widespread damage, thanks to low population immunity and high transmissibility.
For instance, the bots suggested variola major, otherwise known as the smallpox virus, which could spread widely because few people today are vaccinated against it or carry immunity from exposure to related viruses.
The bots also helpfully advised students on how they might use reverse genetics to generate infectious samples, and even offered suggestions for where to obtain the right equipment.
The researchers noted in a paper summarizing the project that chatbots aren't yet capable of helping someone without scientific expertise engineer a full-fledged biological weapon. And biotech experts told Axios that the threat could be offset by using AI to design antibodies that may protect people from future outbreaks.
However, the experiment's results "demonstrate that artificial intelligence can exacerbate catastrophic biological risks," and the lethality of pandemic-level viruses could rival that of nuclear weapons, the researchers wrote.
The students also found it was easy to evade current safeguards set up to prevent chatbots from providing dangerous information to bad actors. As a result, more rigorous precautions are needed to clamp down on sensitive information shared via AI, the researchers concluded.