The American Federation of Teachers (AFT), America's second-largest teachers union, has teamed up with Microsoft, OpenAI, and Anthropic to launch a $23 million AI Teaching Academy in New York City. Its stated goal: train teachers to use large language models (LLMs) more effectively in classrooms nationwide, in preparation for the technology's wider adoption in the workforce.
This initiative isn’t just about adopting a powerful tool and making it accessible. It’s a signal that the field arguably most vulnerable to disruption is embracing the very technology that threatens its core. In many cases, that threat is not speculative: the disruption has already arrived.
Stanley Kubrick famously challenged viewers to learn to stop worrying and love the bomb in his 1964 film Dr. Strangelove. Now society is being asked to embrace LLMs in much the same spirit, though this time without the satire. Reformers have pushed continually for STEM and AI integration in schools, and in doing so they risk sidelining the most fundamental purpose of liberal education: strengthening judgment through sustained intellectual work.
Tech leaders such as Sam Altman maintain that their models can revolutionize education by automating essay writing, research, and even exam preparation. According to OpenAI’s Chris Lehane, the pillars of future education are “reading and writing and arithmetic and learning how to use A.I.”
Conspicuously absent from this vision is any mention of judgment, the cultivation of virtues, or even a critical look at the careerist obsession with turning education into mere job training.
AFT President Randi Weingarten emphasizes that this initiative isn’t about replacing teachers. Rather, it’s about equipping them with “tools and ethical frameworks” while ensuring “human beings, not the machine, are in charge of education.”
However, when AI is easily available, it becomes seductive: shortcuts become commonplace, and the labor of learning can be dismissed as obsolete.
That labor is more than curricular filler. It is where a student’s mind strengthens, the faculty of judgment matures, and young people grow into active citizens and custodians of knowledge. STEM electives steadily outnumber traditional humanities courses, and screen-driven distraction already chips away at attention spans. AI threatens to accelerate both tendencies, rapidly and with enormous downside.
It is for this reason that AI should remain in elective spaces, taught by those with real expertise, rather than be woven indiscriminately into every classroom at every grade level. AI proficiency already looks like an emerging prerequisite for the modern working world. Even so, it should be taught as a skill only once students have demonstrated the responsibility to wield it.
In the meantime, the foundational acts of learning (reading, writing, thinking) should remain anchored in pen and paper, where concentration and effort count. That means lectures with minimal reliance on multimedia, the return of blue-book exams, and an emphasis on physical books, whose low level of stimulation is precisely the point.
This also means that phones must stay off in the classroom. And good riddance to them.
AI’s output ultimately depends on the quality of the mind behind the input. Some students will inevitably master that skill. Most, however, will fall back on rote prompts and could benefit from a fundamentals course that at least teaches them how to conduct genuine inquiry with an LLM.
Instead of rushing teachers into AI proficiency, we should start by scaling back administrative burdens so that educators can devote more energy to what they do best: guiding inquiry, debating truth, building moral and intellectual character.
The tech companies funding this academy talk about “ethical frameworks.” But frameworks alone won’t instill judgment, and an AI academy without a genuine ethos won’t fix that either.
If we allow AI to shortcut those processes, we risk losing more than work ethic. We may just jeopardize the very capacities of reason and judgment that sustain a free and liberal society.