AI-generated answers didn’t just go undetected in exams, they also scored higher than the actual students: UK study

Jun 27, 2024, 09:39 IST
Business Insider India
Representational image (iStock)
When artificial intelligence (AI) tools like ChatGPT burst onto the scene, students around the globe had a predictable, almost reflexive response: How could these digital wonders be used to complete their homework and assignments?

Over time, generative AI has become more advanced, and so have concerns about students using these tools to cheat by passing off AI-generated work as their own. The shift from supervised in-person exams to unsupervised exams at home—a trend that began during the COVID-19 pandemic and continues today in many institutions—has exacerbated these worries. Current tools designed to detect AI-generated text have yet to prove highly effective either.

Now, in a revealing test of the University of Reading's examination system in the UK, AI-generated submissions went largely unnoticed. What's more, these AI-crafted answers also ended up receiving higher grades than those of genuine students.

AI submissions—the new normal?


To delve deeper into this issue, Peter Scarfe of the University of Reading and his team submitted responses generated entirely by the AI chatbot GPT-4 on behalf of 33 fictitious students to the School of Psychology and Clinical Language Sciences. The exam graders were unaware of the experiment.

The study revealed that 94% of the AI submissions were not identified as such. Moreover, the AI-generated answers generally scored higher than those of actual students. Specifically, 83.4% of these submissions earned better grades than a randomly selected set of real student submissions.


These findings suggest not only that students could successfully use AI to cheat, but also that they might achieve superior grades compared to peers who do not engage in such practices. The researchers also raised the possibility that some genuine students might have already succeeded in submitting AI-generated work unnoticed.

From the perspective of academic integrity, these results are deeply troubling. The researchers recommend a return to supervised, in-person examinations as a potential solution to this problem.

However, as AI tools continue to evolve and permeate professional environments, universities may need to focus on integrating AI into education rather than resisting it. Embracing this "new normal" could enhance the learning process.

The findings of this ‘real-world test of AI infiltration of a university examinations system’ challenge the global education sector to adapt, and this is precisely what the University of Reading is pursuing. New policies and guidelines for staff and students are being developed to acknowledge both the risks and the opportunities presented by AI technologies.

The case study was published in the journal PLoS ONE.