The political economy of artificial intelligence

As someone who came to artificial intelligence by way of economics and finance, I was struck by a paradox when I first began advising and writing on technology policy. At every roundtable, someone would raise the specter of overregulation stifling innovation. Yet when I asked who was lobbying for these lighter-touch rules, the answer was always the same: the largest firms, already miles ahead. The AI revolution, like the digital revolutions before it, is not a level playing field. It is a race in which the starting gun has already fired, and only a few runners got a head start.

The central question is no longer whether artificial intelligence will reshape our economy, but who will benefit from that transformation. The political economy of AI — how power, capital, and influence interact to shape its development — is the dimension too often ignored. If we fail to address it, we risk entrenching a new regime of algorithmic power, where the rules are set not by democratic deliberation, but by those who already own the infrastructure.

Consider the architecture of today’s AI systems. The models are data-hungry and compute-intensive, and both inputs are increasingly controlled by a handful of firms. Training a frontier model requires not just talent, but access to thousands of specialized chips and proprietary datasets. Economies of scale, combined with network effects, create a feedback loop: The more a company knows about you, the better it can serve (and monetize) you, drawing in more users, more data, more dominance. The result is a near-perfect lock-in effect, even when users believe they can easily switch to the next chatbot. Talk to enterprise customers building on large-model APIs, and they all lament high switching costs.
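
To see how quickly such a loop compounds, consider a deliberately stylized sketch (my own illustration; the returns-to-data exponent and the 55/45 starting split are assumptions, not estimates). Two firms compete for users, each firm’s perceived quality grows with its accumulated data, and users drift toward the better service:

```python
# Toy model of a data-network-effect feedback loop.
# All numbers are illustrative assumptions, not empirical estimates.

ALPHA = 1.5  # assumed returns to data: quality grows faster than linearly


def simulate(rounds: int = 30, head_start: float = 0.55) -> tuple[float, float]:
    """Two firms split a fixed user base; quality depends on accumulated data."""
    data_a, data_b = head_start, 1.0 - head_start  # initial data stocks
    share_a = share_b = 0.5
    for _ in range(rounds):
        # Perceived quality is convex in data, so scale advantages compound.
        quality_a, quality_b = data_a ** ALPHA, data_b ** ALPHA
        # Users reallocate toward the higher-quality service...
        total = quality_a + quality_b
        share_a, share_b = quality_a / total, quality_b / total
        # ...and serving more users yields more data, closing the loop.
        data_a += share_a
        data_b += share_b
    return share_a, share_b


if __name__ == "__main__":
    a, b = simulate()
    print(f"Final user shares after 30 rounds: A={a:.2f}, B={b:.2f}")
```

Under linear returns, the initial split would simply persist; the moment data advantages compound even slightly faster than linearly, the early leader pulls steadily away. That is the economic intuition behind the moat.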

This isn’t simply a story of technological prowess. It’s one of political structure. As Adam Thierer of the R Street Institute has observed, incumbents in the AI space are not passive beneficiaries of regulation. They are often its authors. Faced with public anxiety over AI risks, lawmakers reach for rules. But in a vacuum of expertise, they turn to the very firms they aim to regulate. The result is a textbook case of regulatory capture: well-meaning guardrails that double as moats.

We have seen this movie before. In the aftermath of the 2008 financial crisis, complex legislation such as the Dodd-Frank Act was supposed to rein in risky behavior. Instead, it entrenched the big banks, which could afford compliance departments, while smaller players were driven out or absorbed. AI may be on a different technological axis, but the underlying political economy is eerily familiar.

Some argue that government investment can correct these imbalances. But without structural reforms to procurement, transparency, and competition policy, public funds risk flowing along the same old channels. A $1 billion grant to an AI lab means something very different if only five firms are eligible to apply.

The social stakes are immense. AI systems are already used to filter job applicants, detect fraud, allocate credit, and surveil public spaces. These decisions are not neutral. They reflect priorities, values, and assumptions, many of them baked into the data and unexamined by their creators. If we concentrate control over these systems, we concentrate control over the subtle mechanics of daily life.

To be clear, this is not an argument against AI or even against scale. Some problems do require vast resources. But if we want innovation that serves the many, not just the few, we must build institutions that reflect that aim. That means supporting independent audit firms, opening up data ecosystems, investing in public compute infrastructure, and designing regulation with an eye to dynamism, not incumbency.

Policymakers face a profound dilemma: If they act too slowly, harms proliferate; if they act too quickly, they risk freezing the status quo in place. But the greater danger is pretending that technology is politically neutral. It never has been. And when the stakes are this high, neutrality is complicity.

AI is not just a technical system. It is a political one. If we treat it only as an engineering problem, we will get engineering solutions: efficient, optimized, and quietly inequitable. If we treat it as a question of political economy, we might still have a shot at something better: a future where intelligence, artificial or not, serves democracy rather than subverting it.

Sebastien Laye is an economist and AI entrepreneur.
