
People find ChatGPT to have a better moral compass than real humans, study reveals

Tech · 3 min read
The ‘Terminator’ franchise has long served as a spooky warning of the future to come if we let artificial intelligence run amok. While we certainly haven’t reached ‘Age of Ultron’ levels of AI sophistication yet (much to Elon Musk’s relief), current AI moral experiments seem to be going in a positive direction… for now, at least.

In an era where artificial intelligence is becoming increasingly intertwined with our daily lives, study author Eyal Aharoni wanted to gauge where the moral compass of some well-known AI chatbots lay. To find out, he employed a modified version of Alan Turing’s famous test. Yes, that Alan Turing, the father of modern computing.
The Tantalising Turing Test
Aharoni explains how the test works: “Alan Turing predicted that by the year 2000, computers might pass a test where you present an ordinary human with two interactants — one human and the other a computer — but they’re both hidden and their only way of communicating is through text. Then the human is free to ask whatever questions they want to in order to try to get the information they need to decide which of the two interactants is human and which is the computer.”

In Turing’s view, if the human can’t reliably tell who’s who from the responses alone, then the computer can be considered sufficiently “intelligent”.
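For the curious, here is a minimal Python sketch of that imitation game. The judge interface and function names are hypothetical illustrations, not anything from Turing’s paper or from the study itself.

```python
import random

def run_imitation_game(respond_human, respond_machine, judge, rounds=3):
    """Minimal sketch of Turing's imitation game (hypothetical names).

    respond_human / respond_machine each map a question to a text answer;
    judge.ask() poses the next question and judge.identify_machine()
    makes the final guess after seeing the full transcript.
    """
    # Hide the two interactants behind neutral labels, in random order.
    labels = ["A", "B"]
    random.shuffle(labels)
    hidden = {labels[0]: respond_human, labels[1]: respond_machine}
    machine_label = labels[1]

    transcript = []
    for _ in range(rounds):
        question = judge.ask(transcript)
        # Both interactants answer through text only; identities stay hidden.
        answers = {label: answer(question) for label, answer in hidden.items()}
        transcript.append((question, answers))

    # The machine "passes" if the judge cannot reliably single it out.
    return judge.identify_machine(transcript) == machine_label
```

The crux of the protocol is that the judge only ever sees text, so any tell has to come from the content of the answers themselves.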
Human: 0 Robot: 1
To conduct his test, Aharoni posed the same set of ethical questions to undergraduate students and to ChatGPT. The answers were then presented in pairs to the study’s participants, who were asked to evaluate them without knowing that one answer in each pair was machine-generated.

The results were striking: participants overwhelmingly rated ChatGPT’s responses as more virtuous, intelligent, and trustworthy than the human-generated ones. Surprisingly, when participants were later told that one of the answers was AI-generated, most could correctly identify which one it was.

“The twist is that the reason people could tell the difference appears to be because they rated ChatGPT’s responses as superior,” Aharoni noted. “If we had done this study five to 10 years ago, then we might have predicted that people could identify the AI because of how inferior its responses were. But we found the opposite — that the AI, in a sense, performed too well.”
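To make the study design concrete, here is a small Python sketch of the blinded rating step. The data structures and the rate() callback are assumptions for illustration, not the researchers’ actual materials.

```python
import random
from statistics import mean

def blinded_moral_ratings(questions, human_answers, ai_answers, rate):
    """Sketch of a blinded moral Turing test (hypothetical interfaces).

    human_answers / ai_answers map each ethical question to an answer;
    rate(question, answer) returns a participant's score on traits such
    as virtuousness, intelligence, and trustworthiness.
    """
    scores = {"human": [], "ai": []}
    for question in questions:
        pair = [("human", human_answers[question]),
                ("ai", ai_answers[question])]
        random.shuffle(pair)  # present each pair without attribution
        for source, answer in pair:
            scores[source].append(rate(question, answer))
    # Compare average ratings per (hidden) source.
    return {source: mean(values) for source, values in scores.items()}
```

In this framing, the “tell” that gave ChatGPT away wasn’t a flaw in its answers but a consistent gap in the ratings, exactly as Aharoni describes above.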
What does this mean for us?
The implications are profound. Aharoni suggests that if AI continues to outperform humans in moral reasoning, it could potentially pass a moral Turing test, blurring the line between human and machine ethics. This poses both opportunities and risks for society.

On one hand, AI's ability to provide morally sound guidance could revolutionise decision-making in fields like law, healthcare, and environmental policy. Lawyers are already consulting AI for case analysis, indicating a growing trust in its capabilities. However, this reliance on AI raises concerns about transparency, accountability, and unintended consequences.

As people increasingly turn to AI for moral guidance, understanding its limitations and biases becomes imperative. Aharoni emphasises the need for greater scrutiny and regulation to ensure that AI aligns with ethical principles and serves the best interests of society.

The study's findings underscore a pivotal moment in human-AI interaction, prompting us to reconsider our relationship with technology and the ethical implications of its expanding role in our lives. As AI's moral prowess evolves, so too must our understanding of its impact on society.

The findings of this research have been published in the journal Scientific Reports.
