
AI comes to political campaigns
Jessica Melugin
As artificial intelligence tools make their way into a growing number of sectors of the economy, concerns are mounting about AI's role in political campaigns.
While campaign rhetoric and advertising have always contained false, exaggerated, or misleading claims, some election watchers worry that AI technologies will supercharge those dangers. In a recent Axios-Morning Consult poll, 53% of respondents agreed that misinformation spread by AI will alter the results of the 2024 presidential election.
Under pressure from left-leaning advocacy groups such as Public Citizen, the Federal Election Commission is seeking public input on AI-created content in political advertisements, specifically the altered video or audio known as "deepfakes." But with the commission evenly split between its three Democratic and three Republican commissioners, there is no consensus about the body's authority to make any rules around AI without explicit direction from Congress.
Congress, meanwhile, is struggling to find agreement on whether or how to regulate AI in general, including the use of the technology in elections. Although a group of Democratic lawmakers has introduced legislation mandating disclaimers in political ads that use AI-generated images, the measure has not attracted support from Republican colleagues. At the state level, Minnesota and Washington have already passed laws addressing deepfakes in political ads; California and Texas are considering similar legislation.
In practice, the 2024 presidential campaign has already waded into the murky waters of AI-generated content. Most notably, Gov. Ron DeSantis's (R-FL) campaign released a video in June that interspersed real footage of former President Donald Trump with images believed to be AI-generated. Those images show Trump hugging Anthony Fauci, the former head of the National Institute of Allergy and Infectious Diseases, with the text "Real Life Trump" superimposed on top.
Those who fear that AI use around elections will further erode voters' confidence in the political process cite that video as an example of how government regulatory efforts are lagging behind real-world use of the technology.
Industry leaders from seven top AI companies gathered at the White House this summer to discuss such dangers, and all pledged, in general terms, to prioritize safety. Perhaps most pertinent to election concerns, representatives from Amazon, Anthropic, Google, Inflection AI, Meta, Microsoft, and OpenAI agreed to step up work on detection tools for AI-generated content, such as watermarking generated images and video. But skeptics view the voluntary nature of those commitments as a fatal flaw.
In the political campaign industry itself, there are notes of optimism that AI tools may level the playing field. Small or underfunded campaigns, lacking the vast resources available to insiders, stand to benefit most from using AI to shape and test the effectiveness of their messages to voters.
But the falling cost of producing campaign materials, as AI makes converting text into images nearly effortless, comes with its own concerns.
In a statement released when the group filed its public comment with the FEC, Public Citizen President Robert Weissman said that "artificial intelligence poses a clear and present threat to our democracy." He added that this could leave "voters completely at a loss to determine what's real from what's fake, an impossible circumstance for a functioning democracy."
Democratic strategist Michael Meehan, CEO of Squared Communications and a veteran of campaigns from the presidential to the local level, told the Washington Examiner, "We are concerned, like with all the technology advances in the last quarter century, about what bad actors do with [AI] in their hands."
Meehan said his firm has no immediate plans to employ the technology in its campaigns and is still “working through strategies on how to defend against aspects of potential use by our opponents.”
“I am encouraged by Google’s announcement to require declaration of use of AI in paid advertising, but the paid ads are only one part. The viral nature of the unpaid is much harder to get our arms around,” said Meehan.
The Washington Post recently reported that, despite OpenAI's own restrictions, ChatGPT still allows campaigns to create content targeting specific voter blocs. OpenAI's usage policies page forbids using its models to generate "campaign materials personalized to or targeted at specific demographics." But the Washington Post was able to prompt the model into producing exactly those kinds of results and fretted that this "could also open a new era in disinformation, making it faster and cheaper to spread targeted political falsehoods."
Senate committee hearings about regulating AI in various contexts are ongoing.
“Hopefully, government can come together to have a commonsense set of rules,” said Meehan when asked about those congressional efforts. “But, in my experience, government is always years behind in its ability to keep up with the lightning pace of technology advances.”