There is nothing a public policy analyst enjoys more than a good analogy. Artificial intelligence literature is replete with them, provided regularly by think tank experts, industry luminaries — Anthropic’s Dario Amodei’s “Machines of Loving Grace,” referencing poet Richard Brautigan, remains my favorite — and philosophers of diverse perspectives. Recently, former Google CEO Eric Schmidt introduced another intriguing concept in his comprehensive paper “Superintelligence Strategy”: MAIM, or mutually assured AI malfunction.
The acronym is a reference to the Cold War era and the RAND-coined MAD, short for mutually assured destruction. Such evocative acronyms, reminiscent of nuclear tensions, are undeniably effective at capturing the public imagination. Schmidt's implied parallel is that just as we stood on the brink of utter devastation during the U.S.-USSR nuclear race 50 years ago, we find ourselves in the same predicament today, caught in the midst of the U.S.-China AI race. Although I do not fully subscribe to this view, believing firmly that the United States can and should lead, albeit with China closely trailing, many economists seem convinced that the race will have no winner.
In a paper titled “The Manhattan Trap: Why a Race to Artificial Superintelligence is Self-Defeating,” Corin Katzke and Gideon Futerman used game theory to investigate the dynamics of international AI competition. Their conclusion was that such a race would be even more dangerous for international stability than the nuclear race as it would heighten the risks of great power conflict, loss of control of AI systems, and the undermining of liberal democracy.
This is also what Schmidt postulates in his paper, highlighting how the AI race differs from the nuclear one: it carries the risk of an AI malfunction and even of preemptive strikes by one of the two rivals. Like the Cold War's MAD doctrine, which hinged on the devastating cost of nuclear conflict deterring aggression, MAIM could function as a deterrent equilibrium or incentivize international collaboration in AI.
MAIM proposes that large-scale attempts by any single nation to achieve unilateral dominance in artificial intelligence capabilities will inevitably invite retaliatory sabotage from rival states. We can envision several forms of action, ranging from covert cyberattacks degrading AI training processes to more direct physical assaults on data centers, underscoring the fragility inherent in maintaining unilateral AI projects aimed at strategic monopoly. Moreover, any advancement would not guarantee final victory, as Schmidt highlights: "If, in a hurried bid for superiority, one state inadvertently loses control of its AI, it jeopardizes the security of all states." Thus, the assurance of general mayhem underpins a precarious yet possibly stable deterrence regime.
During the Cold War, then-Defense Secretary Robert McNamara famously asserted, “The indefinite combination of human fallibility and nuclear weapons will lead to the destruction of nations.” The AI era repurposes this axiom: The indefinite combination of AI’s power and the risk of human miscalculation mandates strategic restraint.
The MAIM regime thus requires clearly communicated escalation thresholds, transparency into data centers, and strategic positioning of infrastructure away from civilian areas. Just as the Cold War produced doctrines to manage nuclear weapons, MAIM would demand rigorous international frameworks and vigilant oversight to prevent catastrophic miscalculations and to preserve the balance of power amid rapid technological advancement.
I have my reservations about the Schmidt paper and his MAIM doctrine, as I have never fully believed in the analogy with nuclear weapons and the Manhattan Project. Nevertheless, MAIM offers a valuable conceptual scaffold for navigating the imminent era of advanced AI systems, anticipated around 2028.
After this transitional period, regardless of which nation first achieves significant breakthroughs, a new international entente will be vital in averting global disorder. Policymakers would thus benefit from considering MAIM seriously, leveraging it as a cornerstone for future international agreements.
In the meantime, this analogy with the Cold War remains both too antiquated and too speculative. We are still a few years away from strikingly intelligent and general AI systems.
Sebastien Laye is an economist and AI entrepreneur.