Senators introduce bipartisan bill to purge child sexual abuse material from AI training data


EXCLUSIVE — Sens. John Cornyn (R-TX) and Andy Kim (D-NJ) introduced bipartisan legislation Tuesday aimed at stopping artificial intelligence systems from being used to generate child sexual abuse material by requiring developers to screen training data for illicit images.

The Preventing Recurring Online Abuse of Children Through Intentional Vetting of Artificial Intelligence Data Act, or PROACTIV AI Data Act, would direct the National Institute of Standards and Technology to issue voluntary best practices for AI developers to identify and remove known child sexual abuse material, or CSAM, from their training datasets.

Sen. John Cornyn (R-TX) is surrounded by reporters as he heads to the chamber during a test vote to begin debate on a border security bill on Wednesday, Feb. 7, 2024, at the Capitol in Washington. (AP Photo/J. Scott Applewhite)

“Modern predators are exploiting advances in AI to develop new AI-generated child sexual abuse material,” Cornyn said. “This legislation would mitigate the risk of AI platforms unintentionally enabling the creation of new content.”

Kim added that the bill represents a chance for Congress and the tech sector to “implement the necessary safeguards to keep our children safe from future misuse or exploitation.”

Sen. Andy Kim (D-NJ) speaks during a hearing of the Senate Committee on Homeland Security and Governmental Affairs on Thursday, April 3, 2025, on Capitol Hill in Washington. (AP Photo/Mark Schiefelbein)

The bill would also provide legal protections to companies that comply with the new guidelines and act in good faith, as well as funding for new research on automated CSAM detection methods.

A recent Stanford University study identified more than 3,000 suspected CSAM entries in the LAION-5B dataset, which is commonly used to train leading AI image generators. The National Center for Missing and Exploited Children said nearly half a million reports of AI-generated CSAM were made in the first half of 2025, up from fewer than 70,000 in all of 2024.

The legislation follows President Donald Trump’s signing in May of the Take It Down Act, a bill championed by first lady Melania Trump to crack down on AI-generated revenge porn and deepfakes.

The PROACTIV AI Data Act follows a growing bipartisan push in Congress to address the rise of AI-generated child exploitation, including efforts from lawmakers such as Sens. Josh Hawley (R-MO), Lindsey Graham (R-SC), and Richard Blumenthal (D-CT), who have championed bills targeting deepfakes and tech platform liability.


Lawmakers have also revived support for broader legislation, such as the EARN IT Act, which would hold tech companies liable for failing to remove CSAM, and the DEEPFAKES Accountability Act, aimed at curbing malicious synthetic media.

Recent House and Senate hearings have spotlighted the surge in AI-generated abuse, with lawmakers from both parties calling for clearer standards and stronger tools to detect harmful content before it spreads online.
