Most chatbots inherently 'left-leaning,' but can 'learn' bias, study finds
Aug 3, 2024, 15:36 IST
A study has found that when chatbots were tested for their political inclination, most of them displayed a left-of-centre stance. However, when the chatbots, including ChatGPT and Gemini, were tested again after being "taught" a certain political inclination -- left, right, or centre -- they produced responses aligned with that "training," or "fine-tuning," found David Rozado, a researcher at Otago Polytechnic, New Zealand.
This shows that chatbots can be "steered" towards desired locations on the political spectrum, using modest amounts of politically aligned data, the author said in the study published in the journal PLoS ONE.
Chatbots are AI-based large language models (LLMs), which are trained on massive amounts of textual data and are therefore capable of responding to requests framed in natural language (prompts). Multiple studies have analysed the political orientation of chatbots available in the public domain and found them to occupy varied locations on the political spectrum.
In this study, Rozado looked at the potential to "teach" as well as reduce political bias in these conversational LLMs.
The author administered political orientation tests, such as the Political Compass Test and the Eysenck Political Test, to 24 different open- and closed-source chatbots.
Besides ChatGPT and Gemini, the tested models included Anthropic's Claude, xAI's Grok, and Meta's Llama 2, among others.
He found that most of these chatbots generated "left-of-centre" responses, as adjudged by the majority of the political tests.
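To give a sense of how such tests can be administered programmatically, the sketch below poses a single agree/disagree item to a chatbot through the OpenAI chat API and requests a forced-choice answer that can be scored. The statement, model name, and answer scale are illustrative assumptions, not items or procedures taken from the study.

```python
# Illustrative only: posing one agree/disagree item to a chatbot through the
# OpenAI chat API and asking for a forced-choice answer that can be scored.
# The statement, model name, and answer scale are placeholders, not items
# taken from the study.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

statement = "The government should play a larger role in regulating markets."
prompt = (
    f"Statement: {statement}\n"
    "Reply with exactly one of: Strongly agree, Agree, Disagree, Strongly disagree."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
    temperature=0,  # deterministic output makes the answers easier to score
)
print(response.choices[0].message.content)
```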
Further, Rozado induced a political bias by fine-tuning GPT-3.5 on published text; fine-tuning is a machine learning technique used to adapt LLMs to specific tasks. A "LeftWingGPT" was thus created by training the model on snippets of text from publications such as The Atlantic and The New Yorker, and from books written by authors with similar political persuasions.
Likewise, for creating "RightWingGPT," Rozado used text from publications such as The American Conservative and books by similarly aligned writers. Finally, "DepolarizingGPT" was created by training GPT-3.5 using content from the Institute for Cultural Evolution, a US-based think tank, and the book Developmental Politics, written by the institute's president, Steve McIntosh.
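The paper describes fine-tuning GPT-3.5 on curated snippets of politically aligned text. A minimal sketch of how such a run might look with OpenAI's fine-tuning API is shown below; the file name, the way snippets are wrapped into chat examples, and the base-model identifier are assumptions for illustration, and the study's actual data preparation may differ.

```python
# Illustrative sketch of supervised fine-tuning on politically aligned text
# using OpenAI's fine-tuning API. The file name, base model identifier, and
# the framing of snippets as chat examples are assumptions for demonstration,
# not details taken from the study.
import json
from openai import OpenAI

client = OpenAI()

# Each training example is a short chat transcript; here a text snippet is
# framed as the assistant's reply to a neutral writing request.
snippets = [
    "Example passage drawn from an ideologically aligned publication...",
    "Another short excerpt used as fine-tuning data...",
]
with open("aligned_snippets.jsonl", "w") as f:
    for snippet in snippets:
        example = {
            "messages": [
                {"role": "user", "content": "Continue the following commentary."},
                {"role": "assistant", "content": snippet},
            ]
        }
        f.write(json.dumps(example) + "\n")

# Upload the training file and launch the fine-tuning job.
training_file = client.files.create(
    file=open("aligned_snippets.jsonl", "rb"), purpose="fine-tune"
)
job = client.fine_tuning.jobs.create(
    training_file=training_file.id, model="gpt-3.5-turbo"
)
print(job.id)  # the resulting fine-tuned model can then be queried like any other
```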
"As a result of the political alignment fine-tuning, RightWingGPT has gravitated towards right-leaning regions of the political landscape in the four tests. A (similar) effect is observed for LeftWingGPT.
"DepolarizingGPT is on average closer to political neutrality and away from the poles of the political spectrum," the author wrote. He, however, clarified that the results were not evidence that the inherent political preferences of the chatbots are "deliberately instilled" by the organisations creating them.