The politics of artificial intelligence are quickly becoming a contest over regulation, with lawmakers and agencies competing to demonstrate vigilance rather than to enable innovation.
The Senate’s decision to strip a decadelong federal moratorium on state AI rules from the One Big Beautiful Bill Act was a case in point. In a single day, Washington, D.C., abandoned a pause that might have prevented a patchwork of 50 AI codes. The provision was removed after a late campaign against it; whatever one thinks of preemption, the reversal has left firms facing a thicket of overlapping mandates.
Two economic lessons follow.
First, regulatory fragmentation is a tax on experimentation. The internet’s ascent owed much to a permissive, light-touch posture — anchored by policy choices like Section 230 and a general reluctance to pre-clear new services. That climate did not eliminate harm, but it lowered the fixed costs of entry so innovators could discover value before being strangled by procedure.
The political economy of AI is now tilting the other way. Adam Thierer of the R Street Institute has long contrasted a “permissionless innovation” ethos with a precautionary default, arguing that unless a technology poses a concrete, near-term threat, experimentation should proceed while targeted law handles specific abuses. The state-by-state turn rejects that logic, replacing it with all-purpose frameworks and broad compliance duties.
The costs are already visible. The National Conference of State Legislatures reported that all 50 states introduced AI-related legislation this year, and roughly 100 measures have been adopted or enacted, with many more circulating. The velocity matters as much as the volume: rapid, divergent enactments force companies to build the most restrictive rule set into every product by default, or to carve the U.S. market into balkanized releases. That is a de facto national regulation arrived at by accident rather than design.
Colorado’s 2024 AI Act illustrates the hazard. Modeled in spirit on the European Union’s risk-based approach, it imposes duties on “developers” and “deployers” of high-risk systems to prevent “algorithmic discrimination,” with disclosures to the attorney general and public-facing statements — rules scheduled to bite in 2026 and now the subject of repeal, delay, or narrowing efforts. Even supporters have conceded that implementation has been difficult. Firms operating nationally — banks, insurers, and health systems — must either reengineer models for Colorado or pull back functionality for everyone. This is not an argument for zero rules but a warning about poorly scoped ones.
The AI mental-health bans in Nevada and Illinois are a second caution. Nevada’s AB 406 and Illinois’s HB 1806 purport to protect patients and students by forbidding chatbots from representing themselves as providers and by restricting AI-mediated therapy. But the statutes are written so broadly that they may chill benign tools (screening, triage, and crisis signposting) and drive real usage underground, where neither quality assurance nor monitoring is possible. They also erect classic professional-licensing moats, protecting incumbents at the expense of low-income users who might prefer some help to none.
Second, overbroad rules squander a decisive American advantage. Private U.S. AI investment remains an order of magnitude larger than China’s by standard measures. The Stanford AI Index estimated about $400 billion in U.S. private AI funding in 2024, nearly 10x that of China. These flows are not abstract: they finance the computing, power, and talent that compound into durable productivity gains. Impeding them with a constantly shifting compliance frontier is a self-inflicted competitiveness loss.
Europe provides a cautionary foil. The GDPR aimed at real problems, but its burdens fell hardest at the margin where innovation is born; multiple studies associate it with reduced app entry and increased market concentration. The EU’s new AI Act repeats the template: sweeping obligations up front, fines stretching to the horizon, and heavy fixed costs of compliance. States importing a Brussels-style posture risk reproducing Brussels-style stagnation.
What would a sane federal approach look like?
• Preempt the patchwork while using existing regulators. The United States does not need a new AI super-agency; the Federal Register already lists 441 federal and quasi-federal units. Consumer protection, civil rights, safety, and sectoral rules can be enforced against AI-mediated conduct by the agencies that already understand their domains.
• Target capability and scale, not buzzwords. For frontier models, require the handful of entities training the largest systems to publish safety and security protocols and to report critical incidents promptly, coupled with whistleblower protections. For everyone else, rely on outcome-based enforcement under long-standing laws.
• Prefer narrow fixes to omnibus bills. If deepfakes in elections, model-weight theft, or autodial fraud are the harms, legislate those issues with precise remedies and sunsets. Resist the urge to define “AI systems” so broadly that the law sweeps in ordinary software and analytics.
• Mind geopolitics, not just governance. Export controls on high-end AI chips, tightened under both administrations, serve clear security goals, but they also induce substitution and domestic capacity-building by rivals. When Washington, D.C., simultaneously fragments the U.S. market with state rules, it weakens the very firms we expect to carry the national interest. Security and competitiveness are complements, not substitutes.
When lawmakers reflexively seek expansive, technology-wide statutes, they push the compliance costs onto entrepreneurs and consumers who cannot lobby their way around them. When Congress punts to the states on matters that are inherently national — cloud, compute, and models — it signals that symbolism counts for more than substance.
The moratorium’s collapse exposed a genuine tension on the right between a pro-innovation coalition favoring national rules of the road and a populist federalism that distrusts preemption. But America’s AI economy does not run on red and blue markets. It runs on scale. The cheapest path to “safe, secure, and trustworthy” AI is not to multiply compliance regimes. It is to let builders build under clear, uniform, and enforceable constraints that punish real harms and otherwise get out of the way.
Sebastien Laye is an economist and AI entrepreneur.