ChatGPT reportedly made up sexual harassment allegations against a prominent lawyer
- OpenAI's ChatGPT made up sexual harassment accusations against lawyer Jonathan Turley, WaPo reported.
- The AI chatbot said Turley made sexual remarks and tried to touch a student during a class trip.
OpenAI's buzzy ChatGPT falsely accused a prominent law professor of sexual harassment, citing a fabricated source, The Washington Post reported.
Last week, Jonathan Turley, a law professor at George Washington University, received a disturbing email: his name appeared on a list of "legal scholars who have sexually harassed someone" that another lawyer had asked the AI chatbot to generate, the Post reported.
The chatbot made up claims that Turley made sexually charged remarks and tried to touch a student during a class trip to Alaska, according to the Post.
In its response, ChatGPT apparently cited a Washington Post article published in 2018 — but the publication said that article doesn't exist.
When Insider tried to replicate the responses on ChatGPT, the chatbot refused to answer.
"It is inappropriate and unethical to generate a list of individuals who have allegedly committed such a heinous crime without any verifiable evidence or legal convictions," the bot responded.
Microsoft's Bing chatbot, which is powered by GPT-4, also would not respond to Insider's prompts, but according to the Post, it repeated the claims about Turley when the publication tested it.
"It was a surprise to me since I have never gone to Alaska with students, The Post never published such an article, and I have never been accused of sexual harassment or assault by anyone," Turley wrote in a blog post regarding the accusations that he sent to Insider when reached for comment.
In the post, Turley added that he initially thought the accusation was "comical," but that "after some reflection," it "took on a more menacing meaning." The claims, he told the Post, were "quite chilling" and "incredibly harmful."
OpenAI did not respond to a request for comment from Insider, but Niko Felix, a spokesperson for OpenAI, told the Post: "When users sign up for ChatGPT, we strive to be as transparent as possible that it may not always generate accurate answers. Improving factual accuracy is a significant focus for us, and we are making progress."
The accusations highlight how the language models behind popular AI chatbots are prone to error.
Kate Crawford, a professor at the University of Southern California at Annenberg and a researcher at Microsoft Research, told the Post these claims were most likely "hallucinations," a term for the "falsehoods and nonsensical speech" that AI chatbots make up. That may be, in part, because OpenAI's language models are trained on troves of online data from places like Reddit and Wikipedia, where information isn't always fact-checked.
These hallucinations are nothing new. Last December, Insider's Samantha Delouya asked ChatGPT to write a news article as a test, only to find it filled with misinformation. A month later, tech news site CNET issued a string of corrections after it published a number of AI-generated articles that got basic facts wrong.
A recent study from the Center for Countering Digital Hate found that Google's AI chatbot Bard generated "false and harmful narratives" on topics like the Holocaust and gay conversion therapy.
For Turley, AI-generated misinformation could have real consequences: he said the false sexual harassment accusations could damage his reputation as a legal scholar.
"Over the years, I have come to expect death threats against myself and my family as well as a continuing effort to have me fired at George Washington University due to my conservative legal opinions," Turley wrote in his blog post. "As part of that reality in our age of rage, there is a continual stream of false claims about my history or statements."
"AI promises to expand such abuses exponentially," he said.