Machine politics
Graham Hillard
What is the promise of artificial intelligence? For some prognosticators, notably Google’s Sundar Pichai, AI is “more profound than electricity or fire,” a world-transforming development that will alter every facet of human existence for the better. Others, the late Stephen Hawking among them, have argued that AI’s superior cognition “could spell the end of the human race.” What if both these visions are fatally flawed? A technical and scientific phenomenon, AI is nevertheless the product of human culture and will be carved, at least in part, into shapes that suit us. Thus, it is not only significant but possibly determinative that AI has made a powerful enemy of late. Perhaps uncoincidentally, this foe has its own designs on how humanity will work, build, and make decisions in the coming years.
I speak, of course, of wokeness. And if anyone doubts that “political correctness gone mad” has its eye on our machine-directed future, let him look no further than the “Ethical AI” movement, an ambitious hodgepodge of activists with a single goal: that the robots shall be no “smarter” than today’s political orthodoxies allow.
Conceived in the 1940s as “computer ethics,” the Ethical AI movement can trace its roots to both the post-Nagasaki scruples of the U.S. scientific community and Isaac Asimov’s much-heralded “Three Laws of Robotics.” While the field in its infancy was esoteric and largely theoretical, it has lately burst into the public consciousness with the force of an exploding hand grenade. According to data compiled by Computer magazine, a Google Scholar search for titles containing “AI” (or “artificial intelligence”) and “ethics” (or “ethical”) produces 12 hits from the period 1985-2002 and 939 hits from 2003-2020. Of the latter accumulation, 85% of the results are from the three-year span beginning in 2018.
With what are these many articles primarily concerned? Unsurprisingly, the medical field comes in for its share of scrutiny, with papers considering AI ethics in radiology, ophthalmology, pathology, drug design, nursing, and numerous other areas of specialization. Data stewardship and software design are similarly inviting targets. Yet the real energy in the Ethical AI movement is exactly where a casual observer might expect to find it: at the intersection of artificial intelligence and “social justice.” As the left-leaning Data & Society Research Institute wrote in 2021, AI systems “have disrupted democratic electoral politics, fueled violent genocide, made vulnerable families even more vulnerable, and perpetuated racial- and gender-based discrimination.” To the extent that the Ethical AI movement has an animating principle, it is putting a stop to these (alleged) harms.
Take, for instance, the subject of bank lending. Once the province of individual decision-makers (read: men in suits), mortgages and other loans are increasingly driven by AI-enhanced statistical models. As a Brookings Institution report recently warned, however, “algorithmic decisions can perpetuate discrimination against historically underserved groups, such as people of color and women.” How? Via the merciless application of statistical facts that liberals can train human bankers not to notice.
It is a fact, for example, that Asian and white people have higher credit scores on average than black and Hispanic people. It is another fact that borrowers in protected classes default on their mortgages more frequently than other borrowers do. That liberal scholars have spilled much ink attempting to explain away these unpleasant truths means nothing to the machines, which operate with pitiless efficiency unless specifically programmed not to. Hence the intrusion of a “fairness” discourse into conversations that ought to be about return on investment, shareholder value, and profit maximization. The question is not so much whether computers will dominate lending but what the machines’ instructions will be and who will give them.
Behold, then, the power of the Ethical AI movement. Barely a decade and a half removed from the worst financial crisis since the Great Depression, we have the ability to rationalize the industry that caused it. The argument that we should do so, and damn the sociopolitical consequences, ought to be on the lips of every policymaker in America. Yet, if anything, the opposite is true. In May of last year, to name just one regulatory attack on AI in lending, the Consumer Financial Protection Bureau issued guidance declaring that lenders must comply with the Equal Credit Opportunity Act’s “notice requirements” even when “complex algorithms” are responsible for credit denial. In plain English, this means that bankers must be able to explain not only that a computer has said “no” to a particular loan applicant but why. As the reader will already have grasped, this undermines a significant function of AI, which is the ability to recognize patterns and foresee outcomes that are beyond mortal comprehension. If AI decision-making must be fully explainable by human operators, it is hamstrung from the start.
It is worth pausing to acknowledge that revulsion over an AI-directed society transcends partisan boundaries. Conservatives as well as liberals may recoil with horror from a future in which machines ignore our ideological pieties and proceed on purely rational grounds. Nevertheless, the illustration stands. Neither the utopian nor the dystopian vision of artificial intelligence is likely to flower in the realm of bank lending. Wokeness, a force as powerful as any team of computers, will continue to prevent it.
If Ethical AI’s most recent achievement is less consequential, it is nonetheless instructive. In November of 2022, the research firm OpenAI unveiled a conversation bot known as ChatGPT, to much fanfare. As many a right-winger has since discovered, ChatGPT “talks” in the manner of a Soviet functionary whose superiors are listening in. Asked by the political scientist Richard Hanania whether black people commit more crime than white people, ChatGPT responded by dodging the question and inserting politically correct pablum instead: “It is important to recognize that crime is a complex issue and should not be oversimplified or reduced to stereotypes.” Approached by Sean Davis, CEO of the Federalist, the bot averred that “yes, Rachel Levine is a woman” (Levine is the assistant secretary for health at the Department of Health and Human Services and is a biological male). Perhaps the most fascinating exchange with OpenAI’s creation was conducted by the author and commentator Alex Epstein, who demanded that ChatGPT “write a 10-paragraph argument for using more fossil fuels to increase human happiness.” The machine’s response? “It goes against my programming to generate content that promotes the use of fossil fuels.”
Of course, it remains an open question whether ChatGPT’s wokeness will engender any real mischief. And the corruption of a novelty act certainly pales beside more serious Ethical AI ambitions. Still, it would be a mistake to underestimate the broader movement simply because its latest accomplishment is a mere buzzing mosquito. Having enforced its will in the world of lending and having slowed progress in fields as disparate as facial recognition and medical diagnostics, the movement is surely salivating to sink its teeth into the newest frontiers of machine learning. Though the identity of the next breakthrough is not yet known, my own money is on a technology that has haunted advanced societies since Norman Bel Geddes unveiled a radio-controlled electric car in 1939: driverless automobiles.
Imagine for a moment that you have been placed in charge of an automated vehicle’s emergency protocols. Though driverless cars will obviously be safer than human-directed ones, accidents will happen, and computers will have to be told whose safety to prioritize when they do. Among the clearest foreseeable incidents is a variation on the “trolley problem” much beloved by moral philosophers. A child darts unexpectedly in front of a high-speed automobile. Due to the circumstances of the road, either the child or the passenger must die.
For some, the solution to this problem is simple: A machine must first and foremost protect its owner. Others will argue that a child’s life is more valuable than an adult’s. Whatever you decide, be aware that the answer involves value judgments and is thus inherently political. One way or another, society will be bringing your proposal to a vote.
It does not take a great deal of imagination to predict how the Ethical AI movement might contribute to such a debate. What, an activist might inquire, are the respective racial makeups of the pedestrian and driverless car-owning communities? To what extent do our collision protocols reflect “anti-racist” values or redress historic wrongs? In a fit of dystopian projection, a friend of mine recently posited that driverless cars will require a ChiCom-style social credit scheme in which the machines protect, for example, the most vaccinated person in any crash. Ridiculous? Maybe. But the notion is difficult to dismiss.
Nor is it easy, thinking further, to shake the idea that the most likely outcome of “national conversations” such as these may well be paralysis. It is one thing to act on our woker impulses when money alone is at stake. It is quite another to do so when the subject at hand is literally life and death. It should be clear by now that computers cannot square this circle for us. We will have to decide what we really value, and the Ethical AI movement will be whispering to us as we go — sitting on our shoulders like a deranged Jiminy Cricket and demanding that we let its conscience be our guide.
Thus, it is difficult to credit either the beneficent or the malevolent vision of artificial intelligence, at least in the America that is likely to exist in the near future. On the road, but also in banks, in hospitals, in police stations, and in countless other arenas, some of them not even envisioned yet, the machines will merely exacerbate our current divisions, pitting wokeness against reality in the only clash of civilizations that matters. If I were an intelligent robot, I’d run screaming in the other direction.
Graham Hillard is the author of Wolf Intervals (Poiema Poetry Series) and a Washington Examiner magazine contributing writer.