ChatGPT developer OpenAI released a series of new tools to help users avoid election misinformation, seeking to allay fears about the role that artificial intelligence might play in shaping the 2024 election.
OpenAI published a blog post on Monday detailing several initiatives it will take in 2024 to combat artificial intelligence-generated misinformation globally. The initiatives include embedding “credentials” that will allow voters to identify “deepfakes,” AI-generated images designed to deceive the public and manipulate their vote. The company is also partnering with national organizations to provide reliable election information and updating its policies to prevent campaigns from using ChatGPT for campaigning and to bar ChatGPT-powered chatbots from impersonating real people or government entities.
“Protecting the integrity of elections requires collaboration from every corner of the democratic process, and we want to make sure our technology is not used in a way that could undermine this process,” OpenAI wrote.
All images generated by OpenAI’s DALL-E 3 image generator will now carry embedded provenance information identifying the image’s origin, who made it, and when it was made. This information will be cryptographically signed under a standard developed by the Coalition for Content Provenance and Authenticity, a group founded by Adobe, Microsoft, Intel, and other technology companies.
Alongside this encoding, OpenAI is developing a tool that users can use to check whether an image was generated by DALL-E.
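For readers curious what that provenance metadata looks like in practice, the short Python sketch below reads an image’s C2PA “Content Credentials” manifest. It assumes the open-source c2patool utility from the Content Authenticity Initiative is installed and on the PATH; that tool is separate from the detection tool OpenAI describes, and the output shown is illustrative only.

```python
# Minimal sketch: inspect an image's C2PA "Content Credentials" metadata.
# Assumes the open-source `c2patool` utility is installed and on PATH;
# it is not part of the OpenAI tooling described in the article.
import json
import subprocess
import sys


def read_content_credentials(image_path: str):
    """Return the image's C2PA manifest store as a dict, or None if absent."""
    result = subprocess.run(
        ["c2patool", image_path],  # prints the manifest store as JSON
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        # No manifest found, or the file could not be read.
        return None
    return json.loads(result.stdout)


if __name__ == "__main__":
    manifest = read_content_credentials(sys.argv[1])
    if manifest is None:
        print("No Content Credentials found.")
    else:
        # A signed manifest can identify the generator and when the image was made.
        print(json.dumps(manifest, indent=2))
```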
ChatGPT’s usage policy now bars political campaigns from building chatbots to promote their candidates. It also bars the impersonation of individuals and government institutions and forbids applications that attempt to discourage users from voting through false information.
OpenAI will also partner with the National Association of Secretaries of State to provide up-to-date information on how and where to vote. NASS describes itself on its website as the “nation’s oldest, nonpartisan professional organization for public officials” and hosts several resources on how to vote in the United States.
Congress and regulators have been discussing how to regulate the effects that AI might have on voters. Sens. Amy Klobuchar (D-MN), Josh Hawley (R-MO), Chris Coons (D-DE), and Susan Collins (R-ME) introduced legislation in September 2023 that would restrict deceptive AI content in elections. Senate Majority Leader Chuck Schumer (D-NY) said he intends to push legislation in early 2024 to establish guidelines addressing AI-powered misinformation.
The Federal Election Commission is considering regulations for AI-generated content as well.