“We’re gonna win so much,” President Donald Trump famously declared on the campaign trail in 2016, “that you may even get tired of winning.”
In keeping with Trump’s over-the-top hyperbole, the White House’s recently released framework for artificial intelligence promises so much winning. And, at least in this instance, the blueprint is a triumph — mostly.
In “Winning the Race: America’s AI Action Plan,” introduced late last month, the Trump administration announced its blueprint for “usher[ing] in a new golden age of human flourishing, economic competitiveness, and national security for the American people.”

The plan’s framing sends a clear signal about the White House’s focus: enabling and even supercharging AI development in the United States and leveraging American success in automation to secure broader domestic and international goals.
The framework correctly embraces the critical role AI will increasingly play in the global economy and shrewdly seeks to harness its best characteristics while aspiring to unshackle its progress from onerous regulation. It also wisely strives to stamp out the dark and divisive forces that occasionally animate modern machines. And it appropriately aims to retrain workers and adapt the labor force to the new AI environment.
But the plan could go further to protect creators’ intellectual property rights, and Trump’s follow-up comments augur an uncertain future for copyright. In addition, the blueprint neglects to account for the slim but nontrivial possibility that AI could go rogue, even apocalyptically so.
So, on balance, do the plan’s merits outweigh its demerits? Let’s first examine its key provisions.
Embracing AI progress
Written principally by Michael Kratsios, the assistant to the president for science and technology; Silicon Valley investor David Sacks, appointed as special adviser for AI and crypto; and national security adviser and Secretary of State Marco Rubio, the Trump AI plan focuses on three principal areas: innovation, infrastructure, and international diplomacy and security.
Regarding innovation, the framework declares that “America must have the most powerful AI systems in the world, but we must also lead the world in creative and transformative application of these systems. Achieving these goals requires the federal government to create the conditions where private-sector-led innovation can flourish.” To meet this objective, the plan aims to remove bureaucratic obstacles to machine development, protect free speech, encourage the use of open-source platforms, invest in data science, ensure interoperability of systems, and accelerate the adoption of machine technology in government, including, most prominently, the Department of Defense. Importantly, it also seeks to protect users against “synthetic media,” such as “sexually explicit, non-consensual deepfakes.”
Regarding infrastructure, the plan recognizes that “AI will require new infrastructure — factories to produce chips, data centers to run those chips, and new sources of energy to power it all.” Along these lines, it calls for expanding different federal programs and expediting various regulatory processes to juice the construction of data centers, energy projects, and improvements to the electrical grid. It also promises to train a skilled AI workforce and bolster critical cybersecurity infrastructure.

And regarding international diplomacy and security, it calls on the U.S. to “export its full AI technology stack — hardware, models, software, applications, and standards — to all countries willing to join America’s AI alliance.” Focusing primarily on American competition with China, the framework seeks to strengthen export controls and thwart Chinese influence over international regulatory bodies purporting to shape the future of machine tech, along with increasing investment in biosecurity and countermeasures to “chemical, biological, radiological, nuclear, or explosives (CBRNE) weapons.”
How do these commitments stack up? And what, if anything, may be missing from them?
“The United States needs to innovate faster and more comprehensively than our competitors in the development and distribution of new AI technology across every field,” the introduction proclaims, “and dismantle unnecessary regulatory barriers that hinder the private sector in doing so.”
In this regard, the plan represents a welcome departure from the top-down, overly onerous executive order issued by former President Joe Biden and annulled by Trump on his first day in office, as well as the EU AI Act that came into effect last year.
The Trump administration rightly seeks to cut red tape across the entire federal government to spur AI innovation, including working with the Office of Management and Budget, the Federal Communications Commission, the Federal Trade Commission, the Department of Defense, and the Department of Commerce, to smooth the glide path toward technological development. AI presents an incredible opportunity to enhance and extend life, and we would be foolish to suppress it unnecessarily.
And here, the action plan hits a home run. “The federal government should not allow AI-related Federal funding to be directed toward states with burdensome AI regulations that waste these funds,” the plan states, “but should also not interfere with states’ rights to pass prudent laws that are not unduly restrictive to innovation.”
In this provision, the administration fired a justified shot across the bow of states such as California, where only a veto by Democratic Gov. Gavin Newsom stood in the way of a progressive legislative measure that would have stifled AI development in Silicon Valley, its very cradle. “While well-intentioned,” Newsom said of Senate Bill 1047 in his veto message, “it does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data. Instead, the bill applies stringent standards to even the most basic functions.” He elaborated that SB 1047 was “not informed by an empirical trajectory analysis of AI systems and capabilities.” When Gavin Newsom is the last line of defense, federal relief is highly welcome.
And so, on this first metric, the action plan succeeds mightily. We should full-throatedly and unashamedly embrace the golden opportunities presented by AI, as the White House has done.
Making AI work for workers
Of course, alongside these opportunities, the rapid development and adoption of large language models, or LLMs, portends serious turmoil, including for the future of employment. The plan allays some of these concerns by promising to “empower American workers in the age of AI,” along the lines of Vice President JD Vance’s landmark AI speech in Europe in February.
In those remarks, Vance vowed that “the Trump administration will maintain a pro-worker growth path for AI so it can be a potent tool for job creation in the United States.” Specifically, he insisted that “we refuse to view AI as a purely disruptive technology that will inevitably automate away our labor force. We believe and we will fight for policies that ensure that AI is going to make our workers more productive, and we expect that they will reap the rewards with higher wages, better benefits, and safer and more prosperous communities.”
Building on these promises, the AI action plan rightly focuses on retraining workers to improve their facility with machine learning and promoting the “next-generation manufacturing” that AI is empowering.
“By accelerating productivity and creating entirely new industries,” the framework states, “AI can help America build an economy that delivers more pathways to economic opportunity for American workers. But it will also transform how work gets done across all industries and occupations, demanding a serious workforce response to help workers navigate that transition.”
Specifically, the blueprint aims to establish an “AI Workforce Research Hub” within the Department of Labor designed to evaluate the impact of AI on the labor market and channel breakthroughs toward improving worker productivity. It also promotes the establishment of pilot programs to “surface scalable, performance-driven strategies that help the workforce system adapt to the speed and complexity of AI-driven labor market change.” Importantly, while the federal government will play a role in jump-starting such programs, the plan seeks to ensure that they’re driven by industry. Once again, the private sector has a much stronger understanding of how best to implement improvements to the labor force than any government bureaucrat.
The framework also strives to leverage state and federal assets to educate students in AI tech. Specifically, it commits to “expand early career exposure programs and pre-apprenticeships that engage middle and high school students in priority AI infrastructure occupations.” Elsewhere, it also pledges to partner with community colleges and career and technical institutions to transition students and new workers to an AI-driven economy.
And by “next-generation manufacturing,” the White House appears to mean production of drones, autonomous consumer vehicles, robotics, and related tech with the goal of making “America and our trusted allies … world-class manufacturers of these next-generation technologies.” Retraining workers dovetails with the critical importance of this emerging sector of autonomous products.
In short, while plenty of uncertainty hovers over the future of the workforce in the age of AI, the Trump plan provides clear, simple, and compelling steps to help us reckon with the challenges presented by machine tech.
Free speech, free from bias
While the slashing of red tape and the emphasis on American innovation are steps in the right direction, the framework’s “anti-woke” provisions have drawn some of the most intense media attention.
“Our AI systems must be free from ideological bias and be designed to pursue objective truth rather than social engineering agendas when users seek factual information or analysis,” the plan insists in its introduction. “AI systems are becoming essential tools, profoundly shaping how Americans consume information, but these tools must also be trustworthy.”
I couldn’t agree more. As I observe in my new book on AI, both the early versions of Google’s Vision AI offering, which churned out overtly racist content, and the initial release of its Gemini program, which generated bizarre images of Asian Vikings and black Nazis, distorted reality to score doctrinal points.
Add to this troubling trend the recent emergence of “MechaHitler” from Grok, the X platform’s chatbot, which unleashed a torrent of antisemitic content after a software update. If we allow our biases and parochial interests to infect the machines we program, we should expect to receive commensurately awful results. It’s therefore gratifying to see the administration working hard to ensure that the AI giants stamp out inappropriate and divisive programming models.
At the same time, as the plan recognizes, we must take care not to inhibit free expression. “It is essential that these systems be built from the ground up with freedom of speech and expression in mind,” the framework also states, “and that U.S. government policy does not interfere with that objective.” Balancing the exigencies of free speech against the poisoning of the public sphere has proven difficult enough in the pre-AI world, and those challenges will only multiply as AI technology matures. But they are nevertheless critical, and it’s heartening to see the White House make efforts to strike an appropriate balance.
Wrong on copyright
But while the Trump AI plan is a refreshing step in the right direction, it falls short in several areas.
First, the blueprint says little about the protection of copyright and other forms of intellectual property in the age of AI, instead merely encouraging “the U.S. government to effectively address security risks to American AI companies, talent, intellectual property, and systems.” While malign activities by foreign actors such as China to usurp the trade secrets of American companies have been legion and are rightly combated in the action plan, there is also a risk that the AI giants themselves may unfairly exploit the rights of creators. What if the call is coming from inside the house?
For several years, AI developers have faced copyright infringement litigation from newspapers, writers, visual artists, and others alarmed at how voraciously LLMs have consumed their creative content during training and how closely they’ve reproduced it when prompted.
For instance, the New York Times charged in a 2023 complaint in federal court that ChatGPT and other generative AI products “were built by copying and using millions of The Times’s copyrighted news articles, in-depth investigations, opinion pieces, reviews, how-to guides, and more.” The Gray Lady also alleged that ChatGPT “can generate output that recites Times content verbatim, closely summarizes it, and mimics its expressive style.” In other words, in training, LLMs such as ChatGPT unapologetically hoover up copyrighted data, and in generating output, they mimic the data. Writers, artists, and others have argued — in court, and outside of it — that consumers and patrons will never pay them for their creative efforts if they can obtain them for free.
Unfortunately for those creators, at an AI summit in late July, the president weighed in on the side of the AI titans in their fight against their adversaries. “You can’t be expected to have a successful AI program when every single article, book, or anything else that you’ve read or studied, you’re supposed to pay for,” Trump said at the summit. “You just can’t do it, because it’s not doable.” The president appeared to adopt OpenAI’s viewpoint regarding the propriety of training LLMs on copyrighted works, if not necessarily of their output.
A coalition of creators, including the Screen Actors Guild-American Federation of Television and Radio Artists, the Directors Guild of America, and the Writers Guild of America, expressed disappointment at the president’s remarks and at the absence of any mention of copyright in the action plan. “Taking creators’ works without consent or payment degrades the incentive to create,” the group said in a statement. “That will harm both American culture and American leadership in AI.”
Here, while we should let the court battles over copyright play out before the executive or legislative branch gets involved, it would have been helpful for the AI action plan to offer at least a general statement about the importance of copyright and the need to balance its protections against the imperative of innovation. Without assurances that their efforts deserve appropriate protection, creators are right to worry.
Doomsday needs to be accounted for
Perhaps even more importantly, the framework is surprisingly and distressingly terse on what, if anything, it intends to do to manage the significant risks posed by AI to human creativity and domestic and global safety.
As I explain in my book, we should be empowering a robust ecosystem of industry groups to develop voluntary but rigorous guidelines for how best to accommodate these challenges. The over-the-top mechanisms of the Biden executive order and the EU AI Act overshoot the mark dangerously, but it would have been nice to see the new action plan feature the work of successful groups such as the AI Alliance and the Partnership on AI, which comprise large and small companies as well as academic institutions.
I also had the opportunity to survey the “AI Doomers,” whose patron saint, Eliezer Yudkowsky, once took to the pages of Time magazine to sound the alarm about machine technology and to urge a shutdown. “Progress in AI capabilities is running vastly, vastly ahead of progress in AI alignment or even progress in understanding what the hell is going on inside those systems,” Yudkowsky wrote. “If we actually do this, we are all going to die.” He concluded his op-ed by encouraging policymakers to consider “destroy[ing] a rogue datacenter by airstrike.”
Needless to say, I don’t share the doomer mentality. Nevertheless, the framework could and should have offered prescriptions for handling absolute worst-case scenarios. I regard the risk of a civilization-ending catastrophe as very slim but nontrivial, and we should be ensuring that AI giants such as OpenAI, Google, and Meta include some form of “kill switch” in their product offerings capable of forestalling a doomsday event.
This concept isn’t merely a science-fiction conceit. In a February 2024 paper, a group of policy analysts, academics, and programmers from Harvard, Oxford, Cambridge, OpenAI, and other institutions proposed a series of protective steps that LLM developers could take to guard against the apocalypse. “In situations where AI systems pose catastrophic risks,” they wrote, “it could be beneficial for regulators to verify that a set of AI chips are operated legitimately or to disable their operation (or a subset of it) if they violate rules. Modified AI chips may be able to support such actions, making it possible to remotely attest to a regulator that they are operating legitimately, and to cease to operate if not.”
In promoting the virtues of AI and in accounting for its risks, we must at least explore such options instead of simply ignoring the (admittedly small) possibility of doomsday.
Thus, as a whole, the ambitious, aggressive Trump AI action plan contains multitudes. (Here, I also commend a captivating panel on the plan hosted last month by my American Enterprise Institute colleagues John Bailey and Will Rinehart.)
On the one hand, it rightly promotes the vigorous development of AI capabilities for the benefit of the U.S. and the world, including by slashing bureaucratic red tape. It accounts for and seeks to empower workers in the new AI-focused economy. And it strikes a helpful balance between promoting free speech and purging LLMs of both woke bias and racist venom.
On the other hand, it comes up short on protecting intellectual property rights and preparing for the worst-case scenario, should it, God forbid, arise.
Still, on balance, the Trump AI action plan is an excellent place to start. Perhaps we’re not yet tired of winning after all.
Michael M. Rosen is an attorney and writer in Israel, a nonresident senior fellow at the American Enterprise Institute, and author of Like Silicon From Clay: What Ancient Jewish Wisdom Can Teach Us About AI.