Study Finds Political Bias in AI Chatbots and Shows Fine-Tuning Can Steer It
A study by David Rozado of Otago Polytechnic found that most chatbots tested, including ChatGPT and Gemini, exhibited a left-of-centre political inclination. When fine-tuned with politically aligned data, the chatbots generated responses consistent with a specified political orientation, highlighting the potential to steer AI chatbots' political biases.
A new study has uncovered that AI chatbots tend to display a left-of-centre political inclination. Research by David Rozado from Otago Polytechnic evaluated several chatbots, including ChatGPT and Gemini, for their political orientation.
Rozado's research revealed that fine-tuning these chatbots with modest amounts of politically aligned data could steer their responses toward specific political positions. He tested whether AI models such as GPT-3.5 could be trained to express varying political viewpoints, and the results indicated that even minor adjustments significantly altered the chatbots' responses.
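The study does not publish its training pipeline, but the basic mechanics of this kind of fine-tuning are straightforward. Below is a minimal sketch of how one might fine-tune GPT-3.5 on politically aligned examples using OpenAI's fine-tuning API; the file name and the data described in the comments are illustrative assumptions, not details taken from the study.

```python
# Minimal sketch: steering a chat model's political tone by fine-tuning
# GPT-3.5 on politically aligned examples via the OpenAI API.
# The file name and dataset contents below are hypothetical.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# 1. Upload a JSONL file of chat-formatted training examples: one
#    {"messages": [{"role": "user", ...}, {"role": "assistant", ...}]}
#    object per line, where the assistant replies reflect the target
#    political orientation.
training_file = client.files.create(
    file=open("politically_aligned_examples.jsonl", "rb"),  # hypothetical file
    purpose="fine-tune",
)

# 2. Launch the fine-tuning job against a GPT-3.5 base model.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",
)
print(job.id, job.status)

# 3. Once the job completes, query the resulting fine-tuned model and
#    compare its answers with the base model's on politically charged
#    prompts to measure the shift.
```

Consistent with the study's findings, even a modest file of such examples can be enough to shift the political slant of a model's responses, which is what makes the effect notable.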
Rozado also clarified that these findings do not imply that developers deliberately embed political biases. Instead, the study sheds light on how strongly training data shapes AI-generated content, and how easily that influence can be redirected.
(With inputs from agencies.)