Trump supporters attempt to woo black voters with fake AI images

Pro-Trump conservatives are making a push to win over black voters for the former president with images generated by artificial intelligence.

Conservatives have published dozens of fabricated images designed to deceive the public, also known as “deepfakes,” featuring black voters wearing pro-Trump paraphernalia and standing alongside the former president, according to the BBC. None of the images was produced by the Trump campaign, but they are designed to promote him at the ballot box as Republicans work to win over black voters for former President Donald Trump in the 2024 election.

The images appear to be produced by local creators. For example, one showing a group of black people gathered around Trump was published by conservative radio show host Mark Kaye.

“I’m not a photojournalist. I’m not out there taking pictures of what’s really happening. I’m a storyteller,” Kaye told the news outlet.

The image of Trump surrounded by black men and women at a party appears relatively realistic. However, an unnatural sheen on several of the people and other minute details reveal that the image is fake, and not every viewer will pick up on those cues.

Kaye defended the images, noting that he was not claiming they were real. He also dismissed fears that such images could influence voters. “If anybody’s voting one way or another because of one photo they see on a Facebook page, that’s a problem with that person, not with the post itself,” Kaye said.

Another image, featuring Trump sitting on a stoop with several black men, was initially posted by a satire account. It was then promoted by another account with a caption claiming it showed Trump leaving his motorcade to spend time with black voters. The account owner, who identified as a Trump-supporting voter from Michigan, noted that the content reached “thousands of wonderful kind-hearted Christian followers” but declined to comment on the AI-generated nature of the image.

AI-generated misinformation has grown as a concern for lawmakers and election officials since image generators such as Stable Diffusion and DALL-E became available to the public, making it easy to create images and deepfakes from simple prompts. While Big Tech companies have adopted policies to label images as AI-generated, it is unclear whether those policies will be sufficient.

Much of the concern about AI-generated content in the 2024 election arose after the controversy over a robocall in the New Hampshire primary featuring a deepfake of President Joe Biden. An unknown party called several Democrats in January and played an AI-generated clip of Biden encouraging them not to vote. The audio was later traced to the AI voice company ElevenLabs, and its creator was identified as the magician Paul Carpenter. Carpenter said he made the clip in under 20 minutes for less than a dollar and that it was commissioned by a Democratic operative.

State and federal lawmakers are weighing new regulations to curb deceptive AI-generated media. Senate Majority Leader Chuck Schumer (D-NY) has hosted several hearings on AI and told reporters in September 2023 that he would prioritize a vote on legislation addressing AI-powered misinformation. Congress has made little progress on such legislation since then, however.

AI-generated content is not easy to detect. Visible identifiers that mark an image as AI-generated can easily be edited out, an analysis by Mozilla found, while invisible identifiers such as watermarks require additional software to read and can be stripped with sophisticated tools. Nevertheless, Big Tech companies such as Google and Meta have partnered to provide tools for users to detect AI-generated misinformation.

With voters still in the primary season, analysts are growing worried about the technology’s effects on voter turnout in the general election. Chatbots such as ChatGPT and Google Gemini failed to provide accurate information about voting, according to a study by AI Democracy Projects and Proof News.
