We need an Operation Warp Speed for AI


Many analysts now believe that artificial intelligence models will cross a significant threshold before the decade’s end. The most optimistic among them identify late 2027 or 2028 as pivotal, forecasting the emergence of artificial general intelligence during the Trump administration.

Such predictions immediately encounter the challenge of clearly defining AGI. Most policy analysts adopt what I describe as a teleological or goal-oriented definition: an AGI is an AI system capable of performing the majority of human-level cognitive tasks, including those typical of a PhD-level researcher or a genius in specialized fields. This definition explicitly excludes physical or sensorimotor activities, thus encompassing primarily tasks executable in a digital environment.

Given the elusive nature of the AGI concept and the difficulty of establishing universal benchmarks, notable industry figures, such as Anthropic CEO Dario Amodei, have resorted to broader descriptors such as “powerful AI” or “advanced AI.” Personally, I find merit in the economists’ concept of radically transformative AI, which emphasizes AI systems producing wide-ranging economic implications.

More recently, policy analysts have suggested concentrating initially on a preliminary phase of AGI, aimed at fully automating machine learning research and development. This stage envisions an AI system capable of accelerating AI R&D productivity tenfold. Once such acceleration is achieved, fully automated AI coders or AI research scientists would follow rapidly, demonstrating sufficient generalization capabilities to realize comprehensive AGI across various domains of knowledge.

Significant speculation has arisen regarding whether the U.S. government should spearhead future projects dedicated to building and acquiring advanced AI or maintain a more detached, regulatory role. Geopolitical considerations, particularly the intense competition with China — recently analyzed by Bill Drexel — have inspired analogies to historical endeavors such as the Manhattan Project or the Apollo space missions. 

Those analogies, however, point to a fundamentally different operational model. Whereas private enterprises were indeed integral to uranium enrichment or lunar module development, the U.S. government meticulously planned, scheduled, and coordinated those projects, seldom resorting to off-the-shelf components.

Currently, no equivalent centralized oversight or orchestration exists in the race toward AGI. Although entities such as the Defense Advanced Research Projects Agency, the network of national laboratories, the recently rebranded Center for AI Standards and Innovation, and the National Institute of Standards and Technology maintain AI-focused R&D programs, the most prominent AI researchers have predominantly migrated to the private sector, specifically to a select group of commercial labs based in Silicon Valley, supplemented by affiliated teams in London and Paris. The talent pool has become so concentrated that nearly all leading research teams trace their origins back to two pioneering entities: Google DeepMind and OpenAI.

Hence, the current race is between Silicon Valley and China, but on the Chinese side, the government is heavily involved, directly overseeing research. In the United States, after the Creating Helpful Incentives to Produce Semiconductors and Science Act was passed during the Biden administration, the second Trump administration is focused on promoting the U.S. AI stack and technology, developing compute power and data centers domestically, establishing standards, and creating the best environment possible for AI labs.

Still, our government is not directly or massively involved in research or the drudgery of model creation. The strange race between an authoritarian, all-powerful Chinese Communist Party and a handful of private Silicon Valley labs raises the question of security. Silicon Valley does not have the background in counterespionage that these circumstances require. It is learning, but it will need some help from D.C.

If we envision more involvement from the U.S. government, what form should it take, and is it really desirable? 

The Institute for AI Policy and Strategy recently published “The US Government’s Role in Advanced AI Development: Predictions and Scenarios” by Bill Anderson-Samways and Oscar Delaney. The institute also conducted a forecasting workshop on the subject with six professional forecasters and five AI public policy experts. Participants worked from the definition of advanced AI described above, an automated machine learning researcher that compresses 10 years of progress into one, which they called AIR-10.

The forecasting exercise highlighted significant divergences in how experts perceive the ideal and practical roles of the U.S. government. On one hand, historical analogies to the Manhattan Project and Apollo Program underscore the benefits of robust government leadership in critical technological endeavors, notably in managing strategic coordination, ensuring security standards, and achieving rapid technological advancement. Experts noted the advantages of direct governmental involvement in orchestrating and financing AI R&D efforts, particularly given the intense geopolitical rivalry with China, which heavily directs and controls its AI initiatives.

On the other hand, a clear consensus emerged around the recognition of current realities in AI research. Unlike the centralized, government-led efforts of past technological races, today’s advanced AI development is primarily driven by private companies. The forecasting session revealed significant skepticism regarding the feasibility of a large-scale, direct governmental management approach akin to Cold War models, given the specialized, dispersed nature of AI talent and the entrenched private-sector leadership.

An important finding from the exercise was the critical gap in security and counterespionage within private-sector AI development. While Silicon Valley leads technologically, it lacks the robust security protocols necessary to counteract espionage threats, particularly from nation-state actors such as China. This vulnerability suggests a clear, necessary role for the U.S. government as a strategic security partner: providing essential protections, guidelines, and regulatory frameworks to safeguard technological developments.

This forecasting exercise confirmed that a Manhattan Project 2.0 was not feasible and not even in the cards, given the current administration’s view of the government’s role in the economy. However, I would advocate a project similar to Operation Warp Speed. 


Operation Warp Speed was a public-private partnership initiated by the United States government to facilitate and accelerate the development, manufacturing, and distribution of COVID-19 vaccines, therapeutics, and diagnostics. It yielded mRNA vaccines in one year. Given the current lack of coordination between private labs and the vagueness of the industry’s definitions of “powerful AI,” the government could set clear definitions, objectives, benchmarks, and tests for AIR-10; fund a few moonshot R&D projects in public laboratories and startups; coordinate the effort in terms of compute power and infrastructure; ensure security and protection from hostile foreign entities; set up partnerships with allies; and manage achievements and public disclosures.

If the U.S. government implements these practices, I am confident that, with the right coordination and effort, AIR-10 could be achieved within a year.

Sebastien Laye is an economist and AI entrepreneur.
