
Some universities are ditching AI detection software amid fears students could be falsely accused of using ChatGPT to cheat

Tom Carter   

  • Several major universities say they have stopped using AI detection tools over accuracy concerns.
  • They say that tools built to spot essays written by AI could lead to students being falsely accused of cheating.

Universities are going back to the drawing board to figure out how to stop students from using ChatGPT to write their essays, after giving up on AI detectors over accuracy concerns.

Several major universities have stopped using AI detection tools provided by anti-plagiarism company Turnitin over fears that the technology could lead to students being falsely accused of cheating, according to a report from Bloomberg.

The decisions come even as ChatGPT's popularity with students soars and educators grow increasingly concerned that it is fueling a cheating epidemic.

"After several months of using and testing this tool, meeting with Turnitin and other AI leaders, and talking to other universities who also have access, Vanderbilt has decided to disable Turnitin's AI detection tool for the foreseeable future," Vanderbilt University said in a blog post published in August.

The university said the detection tool had a 1% false positive rate at launch, which it estimates would have meant around 750 of the 75,000 papers it submitted to Turnitin last year being incorrectly flagged as AI-written.
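For a sense of the scale behind that estimate: applying a 1% false-positive rate to 75,000 submissions works out to roughly 750 wrongly flagged papers. A minimal sketch of the arithmetic, using the figures from Vanderbilt's post (and assuming, as the estimate does, that the flagged papers were human-written):

    # Expected false accusations from a detector's false-positive rate.
    # Figures from Vanderbilt's post: ~75,000 papers sent to Turnitin in a
    # year, and a reported 1% false-positive rate at the tool's launch.
    papers_submitted = 75_000
    false_positive_rate = 0.01  # share of human-written papers flagged as AI

    expected_false_flags = papers_submitted * false_positive_rate
    print(f"Papers incorrectly flagged as AI-written: {expected_false_flags:.0f}")
    # Prints: Papers incorrectly flagged as AI-written: 750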

Northwestern University also said in a post on its website that it was turning Turnitin's AI detector off after a series of consultations, and that the university did not recommend using it to check students' work.

Art Markman, vice provost for academic affairs at the University of Texas, told Bloomberg that his university had stopped using the tool over accuracy concerns.

"If we felt they were accurate enough, then having these tools would be great," he said. "But we don't want to create a situation where students are falsely accused."

Educators have been experimenting with ways to deal with the popularity of generative AI tools like ChatGPT among students, with mixed results.

A Texas professor came under fire for failing half his class after ChatGPT incorrectly identified their essays as written by AI. Other students have reported being falsely accused of using AI by anti-plagiarism software.

Working out when text has been written by AI is notoriously difficult. ChatGPT's creator OpenAI scrapped its own AI text detector tool due to "low rates of accuracy," and in its recent back-to-school guide warned educators that AI content detectors are not reliable.

The company confirmed that many detection tools tend to incorrectly flag work written by non-native English speakers as AI-generated, a concern Vanderbilt also raised.

Turnitin told Insider in a statement that its AI detection software was not designed to be used to punish students. "Turnitin's technology is not meant to replace educators' professional discretion," said its chief product officer, Annie Chechitelli.

