GOP AI video hit on Biden raises fears of deceptive 2024 campaign ads


[Image: Screenshot of the RNC's AI-generated "Beat Biden" ad (YouTube)]



The GOP’s first political ad generated with the help of artificial intelligence could set a dangerous precedent for spreading false information in the 2024 election.

The Republican National Committee released a video on Tuesday depicting what it imagined a second Biden term would look like. It featured a multitude of AI-generated images, from a Chinese invasion of Taiwan to a free-falling economy. While the talking points are nothing new for Republicans, the visceral nature of the images and their sourcing have drawn criticism from academics in the AI field, who say the party is creating deepfakes, or artificially manipulated images involving real people.


“There’s a danger in, anywhere across the political spectrum, using AI-generated images and videos,” Lisa Schirch, a senior research fellow with the Toda Peace Institute, told the Washington Examiner. “Because we know from the research that toxic polarization is getting worse, and it’s getting worse because political leaders again, on the Left and the Right, are using fear-based tactics.”

Schirch specializes in studying the role of technology and its influence on conflict in politics.

While the RNC has not revealed the software used to create the ad, Vahid Behzadan, an assistant professor of data and computer science at the University of New Haven, said that the software needed to make such images is easily purchasable and usable by non-engineers.

“The technology has become much easier to use in recent months,” Behzadan said. He also noted that it would be extremely easy to replicate images of political figureheads due to the large amount of publicly available data about them. If a user asked a large language model, the software used to power AI chatbots, to create a replica of former President Donald Trump or President Joe Biden, there would be more than enough clips of them speaking to fabricate a speech.

One recent viral meme made with such technology involved clips of former presidents playing video games and talking about pop culture. Another user faked a widely shared image of Pope Francis wearing a white puffer jacket. A third group of users deepfaked Nicolas Cage’s face into scenes from famous films, including Raiders of the Lost Ark and Man of Steel. While these pictures and videos are innocuous at first glance, they show how easy it could be to make a video of a public leader stating something they didn’t.

Not all viewers will be able to tell if a video is AI-generated. “Without good digital literacy, the public can’t be expected to understand and be critical of what’s true and what’s not,” Schirch said.


The RNC ad featured a label identifying the content as AI-generated. Labels like that are a step toward ensuring that an ad is identified as AI-generated, Schirch said, but they will not mitigate the fear-based imagery’s effect on users.

Behzadan called for rules restricting the use of AI in political campaigns, but said they would be hard to enforce. The Federal Communications Commission, which regulates political advertising on broadcast media, does not yet have rules governing AI-generated content.

© 2023 Washington Examiner
