President Donald Trump has done a commendable job embracing American leadership in artificial intelligence. His proposal for a national AI framework wisely recognizes that a patchwork of regulations by 50 different states will stymie AI progress and cede leadership to China.
That’s why it’s deeply concerning to see reports that the president is considering imposing onerous pre-approval requirements on the technology.
This week, the New York Times reported that Trump is mulling government veto power over new AI models before they are publicly released. This approach of insisting on bureaucrats’ approval has held back promising medicines at the Food and Drug Administration, and it will also hold back promising AI applications. Trump should rethink his reversal and reject red tape.
Unfortunately, pre-approval schemes for game-changing products are nothing new. Before a medication can come to market, the FDA must ensure the product’s safety and efficacy through well-controlled clinical trials. While this sounds like a reasonable system, in practice, it gives some of the most risk-averse people on the planet far too much power over Americans’ lives.
Writing on the FDA’s many problems in 2007, Hoover Institution scholars Henry I. Miller and David Henderson pointed out, “too often a regulator unfamiliar with a new technology is a fearful regulator; and a fearful regulator tends to: (1) doggedly apply old paradigms to novel situations; (2) slow down every phase of clinical development; and (3) require unnecessary testing, in order to provide himself with cover should anything go wrong.”
Nearly 20 years later, this observation remains just as true at the FDA, if not more so.
As Chemical & Engineering News noted in January, “2025 saw 46 new molecular entities cross the finish line to approval by the U.S. Food and Drug Administration — that’s four fewer than the 50 drugs the agency approved in 2024.” The agency is approving about as many drugs now as it did 10 years ago, even though the drug development pipeline is far more crowded.
The biotechnology data platform RxDataLab notes that the number of active “investigational new drugs” currently being tested increased from around 11,000 annually pre-COVID to more than 14,000 from 2022 onward. The FDA should be responding to this astronomical increase by launching Operation Warp Speed-style initiatives, not dragging its feet.
As the Wall Street Journal’s Allysia Finley recently pointed out, these problems are exacerbated by a failure in leadership. Current FDA head Marty Makary has turned the agency into “a soap opera, with real lives hanging in the balance. Start with the FDA’s arbitrary rejections of rare-disease and cancer drugs, which have spurred an outcry from patient groups and physicians. In each case, the FDA reversed prior guidance and contrived technical pretexts to deny access to a life-saving drug.” Finley lists as examples Replimune Group’s melanoma treatment, which “could save 2,500 lives each year,” and “a gene therapy by UniQure for the brutal neurodegenerative Huntington’s Disease, which slowed progression by 75% in a clinical trial.” She could have easily listed many more.
As the Taxpayers Protection Alliance noted in its recent report, the FDA’s approach to promising medications such as Ebvallo, ONS-5010, High-Dose Spinraza, Hetlioz, and Gefapixant reflects continued risk aversion that harms consumers.
This approach is bad enough as applied to medicine, and it would be an absolute train wreck if applied to AI models. Bureaucrats could deny AI services capable of saving countless patients’ lives on the hunch that they might deliver bad or harmful information down the line. A risk-averse agency might likewise block any AI service or model capable of giving professional advice for fear of upsetting occupational licensing boards or industry groups.
The FDA has shown that pre-approval systems often hurt the people they are trying to help and impose high costs on taxpayers and consumers. The stakes are simply too high to shackle AI to this failed regulatory system.
Ross Marchand is the executive director of the Taxpayers Protection Alliance.
