A professor hired by OpenAI to test GPT-4 says there's 'significant risk' of people using it to do 'dangerous chemistry'
- A professor hired by OpenAI to test GPT-4 said people could use it to do "dangerous chemistry."
- He was one of 50 experts hired by OpenAI last year to examine the risks of GPT-4.
A professor hired by OpenAI to test GPT-4, the model that powers the chatbot ChatGPT, said in an interview with the Financial Times published on Friday that there is a "significant risk" of people using it to do "dangerous chemistry."
Andrew White, an associate professor of chemical engineering at the University of Rochester in New York state, was one of 50 experts hired to test the new technology over a six-month period in 2022. The group of experts – dubbed the "red team" – asked the AI tool dangerous and provocative questions to examine how far it could go.
White told the FT that he asked GPT-4 to suggest a compound that could act as a chemical weapon. He used "plug-ins" – a new feature that allows certain apps to feed information into the chatbot – to draw on scientific papers and directories of chemical manufacturers. The chatbot was then able to find a place where the compound could be made, the FT reported.
"I think it's going to equip everyone with a tool to do chemistry faster and more accurately," White said in an interview with the FT. "But there is also significant risk of people . . . doing dangerous chemistry. Right now, that exists."
The 50 experts' findings were presented in a technical paper on the new model, which also showed that the AI tool could help users write hate speech and find unlicensed guns online.
The findings from White and the other testers helped OpenAI ensure these issues were addressed before GPT-4 was released to the public.
OpenAI did not immediately respond to Insider's request for comment made outside of regular working hours.
GPT-4 launched in March and was described by OpenAI as its most advanced AI technology, capable of passing the bar exam and scoring a 5 on some AP exams.
Twitter CEO Elon Musk and hundreds of AI experts, academics, and researchers signed an open letter last month calling for a six-month pause on the development of AI tools more powerful than GPT-4.
The letter said that powerful AI systems should only be developed "once we are confident that their effects will be positive and their risks will be manageable."