The latest technology designed to identify artificial intelligence-generated images is insufficient to combat the sharing of deceptive images of politicians and public figures in the 2024 elections, according to a new report.
The report, released by the software company Mozilla on Monday, assessed the reliability of seven tools intended to identify AI-generated content, drawing on a series of tests and academic reviews. The AI identifiers were divided into “human-facing disclosure methods,” such as visible labels on images, and “machine-readable methods,” such as invisible changes to an image’s code, known as “watermarks.” The researchers found that the tools fell short of what was needed to counter the sharing of AI-generated images designed to deceive the public, also known as “deepfakes.”
“When it comes to identifying [AI-generated images], we’re at a glass half full, glass half empty moment,” Mozilla research lead Ramak Molavi Vasse’i said in a statement sent to the Washington Examiner. “Current watermarking and labeling technologies show promise and ingenuity, particularly when used together. Still, they’re not enough to effectively counter the dangers of undisclosed synthetic content — especially amid dozens of elections around the world.”
Election officials and lawmakers have warned that deepfakes could cause mischief in elections, particularly in light of events like the New Hampshire robocall that used a fake copy of President Joe Biden’s voice to encourage voters not to vote for him in the primaries. Such creations could also be used for scams or harassment. Some in government have advocated the adoption of tools including labels and watermarks to help users identify AI-generated content. But Monday’s report is the latest indication that such measures are insufficient to keep up with the technology.
Human-facing disclosure methods such as visible labels or audio labels on an AI-generated product were found to be “poor,” according to Molavi Vasse’i. Malicious actors can manipulate such labels, distorting or hiding them from the public eye, the researcher said.
Watermarking technology, such as modifications to an image’s underlying code, its metadata, or the frequencies of an audio track, was found to be a “fair” option for helping with AI detection, Molavi Vasse’i concluded, but the tagging technology relies on the existence of “robust, unbiased, and reliable detection mechanisms.” Users would need easily accessible AI-detection software that could reliably identify the varying types of watermarks for the technology to be effective.
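As a rough illustration of the general embed-then-detect pattern the report describes (and not of any specific tool it evaluated), the sketch below tags an image’s metadata with a disclosure marker and then checks for it. The tag name, value, and file paths are hypothetical, and real provenance schemes such as C2PA Content Credentials are considerably more sophisticated.

```python
# Illustrative only: a toy "machine-readable" disclosure stored in PNG metadata.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

TAG_KEY = "ai_disclosure"        # hypothetical tag name for this example
TAG_VALUE = "generated-by-ai"

def embed_disclosure(src_path: str, dst_path: str) -> None:
    """Copy the image and attach a text chunk declaring it AI-generated."""
    img = Image.open(src_path)
    meta = PngInfo()
    meta.add_text(TAG_KEY, TAG_VALUE)
    img.save(dst_path, "PNG", pnginfo=meta)

def detect_disclosure(path: str) -> bool:
    """Return True if the disclosure tag is present in the image's text chunks."""
    img = Image.open(path)
    return getattr(img, "text", {}).get(TAG_KEY) == TAG_VALUE

if __name__ == "__main__":
    embed_disclosure("generated.png", "generated_tagged.png")
    print(detect_disclosure("generated_tagged.png"))   # True
```

The point of the report’s caveat is visible even in this toy: detection only works if the checking software exists, is widely available, and knows what kind of mark to look for.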
Molavi Vasse’i recommended that lawmakers pass legislation requiring AI-generated images to carry watermarks and adopt a “multifaceted approach that combines technological, regulatory, and educational measures” to mitigate the risks such content presents.
Some academics who study AI are skeptical of watermarking technology’s reliability when it comes to identifying deepfakes. Soheil Feizi, an associate professor of computer science at the University of Maryland, released a study in October in which his research team successfully stripped the vast majority of watermarks from AI-generated images through simple techniques.
Malicious actors affiliated with adversarial countries, such as China or Iran, could easily strip watermarks from AI-generated pictures and videos. Those same actors could also “inject some signal into real images so that those watermarking detectors will detect those images as watermarked images,” Feizi told the Washington Examiner.
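To illustrate why researchers are skeptical, a toy example helps. This is not the Maryland team’s attack, just a sketch building on the metadata example above: re-encoding an image as a lossy JPEG discards PNG metadata tags outright, and it similarly scrambles fragile marks hidden in an image’s low-order pixel bits.

```python
# Illustrative only: a trivial "laundering" step that destroys fragile marks.
from PIL import Image

def strip_by_reencoding(src_path: str, dst_path: str) -> None:
    """Re-save the image as JPEG; PNG text chunks (and fragile pixel-level
    marks hidden in low-order bits) do not survive lossy recompression."""
    img = Image.open(src_path).convert("RGB")
    img.save(dst_path, "JPEG", quality=90)

if __name__ == "__main__":
    strip_by_reencoding("generated_tagged.png", "laundered.jpg")
    # detect_disclosure("laundered.jpg") from the earlier sketch would now
    # return False: the JPEG has no PNG text chunks, so the tag is gone.
```

Robust watermarking schemes are designed to survive this kind of transformation, but the research Feizi describes suggests that even those can often be removed or spoofed.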
Big Tech companies like Meta and OpenAI have partnered to promote voluntary commitments to combat AI-generated misinformation in elections. Twenty technology companies announced on Feb. 16 that they were forming the “Tech Accord to Combat Deceptive Use of AI in 2024 Elections,” a set of commitments to create tools for identifying AI-generated images. The company-driven efforts to combat AI-generated misinformation have arisen as Congress has struggled to enact legislation.