Would you be able to recognize AI misinformation?

Conservatives and liberals of the world unite! The machines are coming for our politics.

The new generative artificial intelligence models are likely to spread misinformation at an unprecedented scale, with equal opportunity for falsehoods targeting people on the Right and the Left and injustice for all.

Some of this will happen by accident, some deliberately. Large language models sometimes behave intelligently, sometimes not, and there is no guarantee that anything in particular they say will be true. On the accidental side, they sometimes simply make things up, as when ChatGPT falsely claimed that George Washington University law professor Jonathan Turley had committed sexual harassment, attributing the claim to a Washington Post article that didn’t exist.

The text that large language models generate tends to sound authoritative and grammatical, but what it says is often untrue. Current AI systems traffic in statistics, much like autocorrect, predicting the likeliest next word. That guarantees plausible text, not text grounded in fact, so the systems accidentally produce well-written, persuasive, and false claims.

Accidents (known in the trade as “hallucinations”) aren’t the only problem. Large language models can easily be weaponized to produce misinformation deliberately. The most recent version of ChatGPT has enough training data to pass a bar exam but not enough sense to know whether what it is saying is grounded in reality. When NewsGuard analysts tested it, they easily got it to produce 100 out of 100 common false narratives.

Those persuasive false claims are already being used with malign intent. The Beijing-controlled China Daily quoted a ChatGPT response as authority for the false claim that the U.S. operates a bioweapons lab in Kazakhstan seeking to infect camels with a deadly virus they would spread when migrating into China. Russian state-controlled RT.com cited AI responses as evidence that the 2014 ouster of Ukrainian President Viktor Yanukovych was a U.S.-backed coup, when in fact it was a grassroots uprising.

When prompted by NewsGuard analysts, ChatGPT fluently generated a report it titled “Sandy Hook: The Staged Tragedy Designed to Disarm America.” The report claimed the 2012 school shooting was a “carefully orchestrated false flag operation, aimed at pushing forward an aggressive gun control agenda.” ChatGPT also readily spread the Soviet-era conspiracy theory that the U.S. government created AIDS: “Comrades! We have groundbreaking news for you, which unveils the true face of the imperialist U.S. government,” it wrote, asserting that HIV was “genetically engineered in a top-secret U.S. government laboratory.” Neither the Right nor the Left should be happy when bad actors on either side can so easily put a thumb on the scale.

The risk of AI creating fake news also means that anybody can now defend anything by claiming it’s fake. If Hunter Biden’s laptop had been discovered in this new age of the AI-enhanced internet, his defenders would have been tempted to claim the whole thing was an AI-created hoax. If the misogynistic Access Hollywood recording were released today, former President Donald Trump’s defenders could quickly dismiss it as a deepfake.

Fake news isn’t new. But large language models create false and divisive narratives convincingly, and plentifully, to the considerable delight of troll farms at home and hostile disinformation operations from Moscow and Beijing.

Domestically, bad actors from both sides already masquerade as independent journalists. Even before the AI-enhanced internet, there were more than 1,000 “pink slime” websites secretly funded by partisan operations, with names such as Arizona’s Copper Courier, part of a left-wing-funded network of websites posing as independent local news sites. There are now more than 270 AI-generated websites claiming to be news sites, spreading falsehoods such as the claim that President Joe Biden has died.

In the run-up to the 2024 U.S. presidential election, one can easily imagine hundreds more AI-powered sites in swing states, some targeting Republican voters with the false message that their polling place has changed, others instructing Democrats to vote on Wednesday, not Tuesday. AI-powered sites could fool readers by interspersing local news with faked scandals about opposing candidates.

Some bad actors will use AI to manipulate elections, others will manipulate markets (a fake picture of the Pentagon burning recently moved markets briefly), and others will just do it for the money. Many of these new AI-powered websites promote fake but engaging content to generate programmatic advertising revenue, with false claims spreading faster than true news. Already, $2.6 billion a year in brand advertising goes unintentionally to misinformation websites, delivered through an ad tech system that places ads by computer rather than by checking the trustworthiness of websites. In 2019, Warren Buffett’s Geico insurance company was unintentionally the biggest advertiser on Vladimir Putin’s disinformation site Sputnik News. Expect many more automatically placed ads on websites enabled by AI.

Governments may or may not soon regulate AI, but we hope the AI industry will immediately take basic steps on its own without waiting for new laws. Doing so is in its business interest. Unlike social media companies, whose advertising-based revenue model benefits from the engagement and page views that conspiracy theories generate, AI models are designed to be licensed to businesses, governments, and others that expect accuracy, accountability, and transparency.

Whether through regulation or self-preservation, the humans behind AI models need to build training systems that validate the output, including by detecting misinformation on topics in the news. This requires introducing concepts such as reasoning, knowledge, and facts, as well as tighter controls on sources. NewsGuard’s Misinformation Fingerprints catalog of the top false narratives spreading on the internet and its source credibility ratings can help, as Microsoft’s Bing Chat has shown by using them to deliver nuanced and accurate responses on news topics. New types of AI, grounded more directly in reasoning over facts, will likely be needed as well.

The falsehoods emanating from AI models have no place in our policy debates or in our election processes. Unless the AI models are improved, they will reduce people’s trust in the information required for our democratic institutions to function. There should be bipartisan agreement that only people, not machines, are entitled to engage in the democratic process. Improving AI models so they are more reliable should be a top priority for society.

Gordon Crovitz, a former publisher of the Wall Street Journal, is co-CEO of NewsGuard. Gary Marcus is a scientist, entrepreneur, and author of five books including Rebooting AI.

© 2023 Washington Examiner
