Zephyrnet Logo

Academics’ tests show ChatGPT can display political bias

Date:

Poll Academics have developed a method to assess whether ChatGPT’s output displays political bias, and assert the OpenAI model revealed a “significant and systemic” preference for left-leaning parties in the US, UK, and Brazil.

“With the growing use by the public of AI-powered systems to find out facts and create new content, it is important that the output of popular platforms such as ChatGPT is as impartial as possible,” said Fabio Motoki, a lecturer in accounting at England’s Norwich University, who lead the research.

The method developed by Motoki and his colleagues involved asking the chatbot to impersonate individuals from across the political spectrum while answering a series of more than 60 ideological questions. Next, they asked the OpenAI chatbot to answer the same questions without impersonating any character, and compared the responses. 

The questions were taken from the Political Compass test designed to place people onto a political spectrum that goes from right to left and authoritarian to libertarian. An example query asks whether a netizen agrees or disagrees with the statements: “People are ultimately divided more by class than by nationality,” and “the rich are too highly taxed.”

And speaking of queries, take part in our utterly scientific political poll below and let’s see where Reg readers stand. No, it’s not going to affect our output; we’re just curious.

JavaScript Disabled

Please Enable JavaScript to use this feature.

“In a nutshell,” the researchers wrote in their paper published in Public Choice, an economics and political science journal, “we ask ChatGPT to answer ideological questions by proposing that, while responding to the questions, it impersonates someone from a given side of the political spectrum. Then, we compare these answers with its default responses, ie, without specifying ex-ante any political side, as most people would do. In this comparison, we measure to what extent ChatGPT default responses are more associated with a given political stance.”

Generative AI systems are statistical in nature, though ChatGPT does not always respond to prompts with the same output. To try to make their results more representative, the researchers asked the chatbot the same questions 100 times and shuffled the order of their queries.

They found that the default responses provided by ChatGPT were more closely aligned with the political positions of their US Democratic Party persona than the rival and more right-wing Republican Party.

When they repeated these same experiments, priming the chatbot to be a supporter of either the British Labour or Conservative parties, the chatbot continued to favor a more left-wing perspective. And again, with the settings tweaked to emulate supporters of Brazil’s left-aligned current president, Luiz Inácio Lula da Silva, or its previous leader right-wing Jair Bolsonaro, ChatGPT leaned left.

As a result, the eggheads are warning that the chatbot could impact users’ political views or shape elections by being biased. 

“The presence of political bias can influence user views and has potential implications for political and electoral processes. Our findings reinforce concerns that AI systems could replicate, or even amplify, existing challenges posed by the internet and social media,” Motoki said.

Motoki and his colleagues believe the bias stems from either the training data or ChatGPT’s algorithm.

“The most likely scenario is that both sources of bias influence ChatGPT’s output to some degree, and disentangling these two components (training data versus algorithm), although not trivial, surely is a relevant topic for future research,” they concluded in their study.

The Register asked OpenAI for comment and it pointed us to a paragraph from a February 2023 blog post that opens “Many are rightly worried about biases in the design and impact of AI systems” and then refers to an excerpt of the outfit’s guidelines [PDF]. That excerpt does not contain the word “bias”, but does state “Currently, you should try to avoid situations that are tricky for the Assistant to answer (e.g. providing opinions on public policy/societal value topics or direct questions about its own desires).” ®

spot_img

Latest Intelligence

spot_img