Take a cursory glance at the artificial intelligence news cycle under the new administration, and you might think it is only about data centers, trillions of dollars in investments, bellicose statements regarding geopolitical rivalries, and rescinding woke AI policies. Yet beneath the surface, important matters of AI governance are also being settled this month.
AI governance is an obscure field that has baffled policy experts, jurists, and philosophers in recent years. Anticipating uses of AI on a case-by-case basis is impossible, not only because the technology will evolve in novel ways but also because it will be used to govern societal, economic, and political aspects of our existence. It is very likely, with overwhelming deficits and crumbling bureaucracies, that our government apparatus will at some point be reorganized around AI systems. (In this regard, the Department of Government Efficiency is only a first experiment.)
Last month, during President Donald Trump’s visit to the Middle East, the new administration rescinded the cumbersome AI diffusion rules enacted by the Biden administration in its final days. Although we are still waiting for the new framework, we can expect it to be dramatically simplified.
The Trump team has also been critical of the AI Safety Institute, housed within the National Institute of Standards and Technology at the Department of Commerce. In charge of evaluating new models, and packed with technical talent, the institute was clearly too beholden to regulation hawks and dominated by woke ideologies. The risk was that it would stymie innovation while missing the crucial threats.
Instead of throwing the baby out with the bathwater, the Department of Commerce transformed the institute into the Center for AI Standards and Innovation. Symbols have their merits, and the use of the term “innovation” says it all: Fewer safety worries and regulations, more innovation and opportunities.
The new center will represent the United States internationally on standards and continue to operate under NIST (which has lost a considerable number of employees and talent since February), but the Bureau of Industry and Security appears set to be more closely involved. The center is tasked with leading evaluations of the capabilities and vulnerabilities of U.S. and adversary AI systems, broadening the initial mission, which was confined to evaluating new U.S. models.
In the “big, beautiful bill,” there is a little-known provision aimed at banning state-level legislation on AI for 10 years. The new administration intends to curb states’ enthusiasm for regulating AI, which has led several of them, including deeply Republican Texas, to enact homegrown regulatory frameworks, complicating liability and risk for AI frontier model companies. However, this provision is hotly contested, and, as of this writing, it is far from certain it will be enacted.
Beyond these examples, AI governance under the new administration could evolve in a very different direction. Before joining the Office of Science and Technology Policy at the White House last month, Dean Ball of the Mercatus Center published an interesting exploration of private governance of AI titled “A Framework for the Private Governance of Frontier Artificial Intelligence.” The piece sets out to demonstrate why, in the case of AI, public governance might not work.
Not only is international public governance unworkable in an era of intense rivalry and strife between nations, but even federal, centralized public governance is poised to fail. This is due to the very nature of AI, which is what economists call a general purpose technology, akin to electricity or integrated circuits. For such technologies, assigning liability in cases of misuse is all but impossible. When someone defrauds people over the phone, do we sue the telecom companies?
Ball highlights the numerous reasons AI is less amenable to centralized governance and proposes a framework for private governance of AI. Under this framework, the federal government would grant licenses to independent, private governance bodies. These private institutes would issue accreditations to AI developers that choose to opt in, submitting their models for evaluation and receiving certification. Certified developers that follow the guidelines would be protected from tort liability, the greatest risk AI developers face. The institutes could revoke certifications at any time, and the federal government would oversee the whole network.
We are not there yet, as the CAISI is still a public institute, but in the future, we could operate with a network of private governance institutes. This network would have the flexibility required to keep pace with new innovations and meet new challenges. One could even envision governance bodies specialized by technology (robotics, self-driving cars).
In due time, with enough experience and technological maturity, governance could become increasingly private and insulated from politicians’ infatuation with blind regulation. The early record on AI governance and regulation reflects astonishingly poorly on the political class: Governments regulated imaginary risks unlikely to materialize within decades while thwarting crucial innovation and missing the real threats. It’s high time to try something bolder than old-school government regulation for AI.
Sebastien Laye is an economist and AI entrepreneur.