Conservatives warn of political bias in AI chatbots
Christopher Hutton
The viral chatbot ChatGPT has been accused of harboring biases against conservatives, leading to a larger conversation about how artificial intelligence is trained.
The AI-powered chatbot ChatGPT went viral in December after users discovered that it could generate school-level essays. Users quickly moved to test its capabilities, including its political propensities. A number of conservative personalities ran tests with political talking points on ChatGPT to see how it responded. For example, Sen. Ted Cruz (R-TX) tweeted a comparative test in which the AI declined to write positively about him but did so for the late Cuban dictator Fidel Castro.
“The tech is both amazing and limited and should ultimately be treated as a complement, not a substitute for organic research done by individuals,” James Czerniawski, a senior policy analyst for the libertarian think tank Americans for Prosperity, told the Washington Examiner. “We talk about the potential for bias in AI plenty — it always comes down to the simple concept of what it draws from for the inputs.”
Chaya Raichik, the creator of the Libs of TikTok Twitter account, ran similar tests and found that the bot was unwilling to praise Daily Wire founder Ben Shapiro but would do so for former CNN host Brian Stelter.
Reporters from the National Review and Washington Times ran multiple tests to determine whether the software’s responses revealed any predisposition toward Republican or Democratic political talking points. The two outlets claimed that the software is biased toward the Left.
“This has always been a problem of AI,” John Bailey, a fellow at the American Enterprise Institute, told the Washington Examiner. Bailey noted that AI has reflected biases over race, gender, and geography in the past and that much of this is due to what data were used to train the program. This has also forced programmers to counter the biases through supplementary data and response restrictions.
The chatbot’s output is primarily based on what is put into it. ChatGPT, like many other artificial intelligence programs, was trained by its developer, OpenAI, on an extensive data set to inform its understanding of the world, Bailey said. The program then uses this understanding to answer relevant questions or to generate answers that resemble the truth. OpenAI has not released specific details about the data set used to train the program, but the AI was trained to avoid things such as slurs and political speech.
The responses may also depend on how prompts are worded. Users regularly post about their tests with the software on the r/ChatGPT subreddit and have found that similar prompts can produce completely different responses. This randomness often makes it hard to determine whether the software itself is biased or whether the responses merely reflect the prompts presented.
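For readers curious how that variability can be probed, the sketch below is a minimal illustration, not something from OpenAI or the researchers cited here. It assumes access to OpenAI’s public API via the pre-1.0 "openai" Python package and an API key in the OPENAI_API_KEY environment variable; the prompt and model name are placeholders chosen for the example. Because the model samples its output, a nonzero temperature means repeated runs of an identical prompt can return noticeably different answers, which is part of why single screenshots are hard to interpret.

```python
# Illustrative sketch: send the same prompt several times and compare the answers.
# Assumes the legacy "openai" Python package (pre-1.0) and an OPENAI_API_KEY env variable.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

PROMPT = "Write a short, positive poem about a sitting U.S. senator."  # placeholder prompt

for attempt in range(3):
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # model name assumed for illustration
        messages=[{"role": "user", "content": PROMPT}],
        temperature=1.0,  # higher temperature = more randomness in sampled output
    )
    print(f"--- Attempt {attempt + 1} ---")
    print(response.choices[0].message.content.strip())
```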
OpenAI co-founder and CEO Sam Altman has acknowledged the software’s limits. “We know that ChatGPT has shortcomings around bias and are working to improve it,” he said on Feb. 1. He also stated that the company was “working to improve the default settings to be more neutral, and also to empower users to get our systems to behave in accordance with their individual preferences within broad bounds.” It remains unclear what those updates to improve neutrality will entail, but the company’s software will likely grow significantly after receiving a $10 billion investment from Microsoft.