A new study has revealed that ChatGPT, the artificial intelligence (AI) chatbot developed by OpenAI, is exhibiting a rightward shift in political responses.
Researchers at Peking University examined how different ChatGPT models responded to political questions over time. Their study, published in Humanities and Social Sciences Communications, involved asking ChatGPT 62 questions from the Political Compass Test, an online tool that assesses political alignment. Each question was repeated more than 3,000 times per model to track how responses changed.
While ChatGPT still leans towards the “libertarian left,” newer versions of its underlying models, GPT-3.5 and GPT-4, showed a noticeable rightward shift over time. The researchers consider this shift significant given the widespread use of AI chatbots and their potential influence on public opinion.
This study builds on previous research from the Massachusetts Institute of Technology (MIT) and the UK’s Centre for Policy Studies, both of which identified a left-leaning bias in large language models (LLMs). However, those studies did not examine how AI responses evolved over time.
The researchers propose three possible explanations for ChatGPT’s political shift: changes to the training data, interactions with users, and updates to the chatbot itself. Because AI models adapt in response to user feedback, the shift may also reflect broader societal trends.
Experts warn that unchecked AI bias could deliver “skewed information” to users, potentially reinforcing political polarisation. To counteract this, the researchers recommend ongoing audits and transparency reports to help keep AI-generated responses fair and balanced.