Beneath the freshly mown lawns of our universities, three successive earthquakes have opened up an intellectual abyss.
The first, Covid, showed that money mattered more than mission when universities continued to charge full tuition for an inferior online product: asynchronous lectures and “real-time seminars” where many students were just black squares on Zoom. The second came after October 7, with the revelation that universities had suffered extensive ideological capture — that much of what passes for higher education is actually indoctrination in cultural Marxism. Mobs of Ivy League students — radicalised by “critical theories” of oppression and victimisation, yet wearing Covid masks and keffiyeh face-coverings to protect their future earnings — harassed and intimidated Jewish classmates while the presidents of Harvard, Penn, and Columbia merely looked on.
The third, the most seismic of them all, is ongoing. And it is not ideology, but technology that is precipitating the greatest crisis higher education has ever faced.
A recent article by James D. Walsh in New York Magazine, widely circulated among academics, reported that “just two months after OpenAI launched ChatGPT [in 2022], a survey of 1,000 college students found that nearly 90 percent of them had used the chatbot to help with homework assignments”. The use of Generative-AI chatbots for required coursework is, if anything, even more widespread today. At elite universities, community colleges, and everything in between, students are using AI to take notes in class, produce practice tests, write essays, analyse data, and compose computer code, among other things. A freshman seems to speak for entire cohorts of undergraduates when she admits that “we rely on it, [and] we can’t really imagine being without it”. An article in The Chronicle of Higher Education quotes multiple students who are effectively addicted to the technology, and are distressed at being unable to kick the habit — because, as an NYU senior confesses, “I know I am learning NOTHING.”
One problem is that it’s difficult to prove that students have cheated with chatbots. They’ve learned how to detect “Trojan horse” traps in assignments, engineer prompts that won’t make them look too smart, and launder their essays through multiple bot-generated iterations. Nor is AI-powered software a reliable means of detecting such schemes. (This is unsurprising: why would a provider like OpenAI, which makes ChatGPT Plus free during final exams, want to imperil huge student demand for its product?) And in the long run, market forces will always keep students one step ahead of their professors. Walsh’s article highlights a young man whose attempts to monetise dishonesty got him expelled from Columbia. This former student has raised more than $5 million to design a wearable device that “will enable you to cheat on pretty much anything” in real time.
Faced with these difficulties, universities have punted. They’ve done little more than leave faculty to establish their own AI-use policies, which vary widely and are, in any case, largely unenforceable. (What is more, some professors are using chatbots to formulate assignments and grade papers. In their classrooms, machines are talking to machines.) This response is completely inadequate. Universities will not survive if they are little more than expensive diploma mills. Nor will the United States, for what will take their place in preparing future citizens, leaders, and builders to repair our broken institutions and maintain a healthy and prospering polity?
AI optimists would argue that such concerns are overblown. That’s because superintelligent machines capable of “usher[ing] in an age of flourishing the likes of which we have never seen” will soon be doing “the real thinking”, as the economist Tyler Cowen and Avital Balwit, an AI researcher, write in The Free Press. If the authors are right, students and professors need to concentrate on learning how to use AI, which will take over the management of human affairs. Yet they don’t explain what they mean by flourishing — a question raised by their admission that, by undermining our sense of purpose, radically degrading our agency, and making meaningful forms of work obsolete, AI will introduce “perhaps the most profound identity crisis humanity has ever faced”. Can human beings flourish when they’ve given up much of what makes them human?
That’s not the only problem with their predictions. In everything from diplomacy to medicine, real thinking — thinking at the highest levels, where strategies are devised and executed — requires practical wisdom: an adequate understanding, not just of the range of tools available to us and how to operate them, but of the ends these tools ought to serve. Cowen writes that, in his field of economics, he’s still better at posing questions than AI. That’s a significant admission, because the questions we ask are determined by our ends. The student who told Walsh that using ChatGPT allows her to spend so many hours on TikTok that “my eyes start hurting” asked the wrong question — How can I make more time for social media? — because she was pursuing the wrong end. She has some ability to engineer chatbot prompts — her grades are “amazing” — but doesn’t know how to employ that skill for her own benefit. And her prioritisation of vapid entertainment offers a clue about how most people whose jobs are eliminated by AI will use their leisure, a question that inspired dread in the economist John Maynard Keynes when he realised, a century ago, that “technological unemployment” would eventually deprive mankind of its “traditional purpose”.
Simply to use AI well, it seems, requires a liberal education: one that teaches what makes for a good and meaningful life, and cultivates the well-formed loves and dispositions that motivate one to act on that knowledge. Where will AI acquire the knowledge and motivation it needs reliably to use its own capacities for the benefit of human beings? From AI researchers and developers, whose knowledge and intellectual powers may not extend much beyond coding? Its current training is not significantly weighted toward a study of great books, art, music, and “the best which has been thought and said”, in Matthew Arnold’s phrase. That’s clear enough from the case of the advanced Pathways Language Model (PaLM), as described in a 2023 article. Social media conversations accounted for 50% of the dataset on which PaLM trained; webpages, Wikipedia, news, and computer code comprised 37%. Only 13% consisted of books, doubtless of greatly varying quality.
Nor are the results of Large Language Models (LLMs) like ChatGPT generated by the kind of deeply internalised judgement that liberal education aims to cultivate. LLMs respond to prompts based on complex calculations of probability across a vast, digitally encoded reservoir of opinions: good, bad, and ugly, true and false. And as helpful as they can be in conversation with the most thoughtful users, they can’t be fully trusted to identify and communicate matters of fact, let alone to regulate themselves by an adequate understanding of the nature and conditions of human flourishing and the sources of meaning in human life. That’s clear enough from their notorious hallucinations, and from their disturbing inclination to lie — an inclination that’s presumably been strengthened by what these LLMs are learning from users who lie, to themselves as well as others.
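The mechanism matters here, and it is easier to see in miniature. Below is a deliberately toy sketch in Python of the single step an LLM repeats, token after token: converting scores for candidate next words into probabilities and sampling one. The four-word vocabulary and the scores are invented purely for illustration; this is no actual model’s code, only the shape of the calculation.

```python
import math
import random

# Toy next-token step. A real model scores tens of thousands of tokens
# using billions of learned weights; the vocabulary and scores below
# are made up purely for illustration.
vocab = ["true", "false", "probably", "banana"]
logits = [2.1, 1.9, 1.4, -3.0]  # hypothetical raw scores for the next token

# Softmax turns raw scores into a probability distribution.
exps = [math.exp(x) for x in logits]
total = sum(exps)
probs = [e / total for e in exps]

# The model then samples from that distribution: a weighted draw,
# not an act of judgement about which continuation is actually true.
next_token = random.choices(vocab, weights=probs, k=1)[0]
print({w: round(p, 3) for w, p in zip(vocab, probs)}, "->", next_token)
```

However refined the scoring becomes, each step remains a weighted draw over patterns absorbed from that reservoir of opinions; a fluent falsehood with a high score wins the roll as readily as a truth.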
AI can’t overcome these problems just by being, as Cowen claims, “smarter than all of us”. It would instead need to achieve real human intelligence: to comprehend qualitatively, not just quantitatively; to intuit, not merely to calculate. AI developers would object that human intuition is the product of a neural network that is not functionally different from that of AI models. This claim ignores the fact that digital computation, which encodes information in discrete binary states abstracted from continuously varying voltage levels, is merely a mechanical simulacrum of our native abilities. That is what makes it “artificial”. AI’s training is in any case as removed from the concrete and particular circumstances of human development, experience, and education as its software-based mind is from ours. For while our minds inhabit mortal bodies embedded in actually existing circumstances and communities, AI observes and imitates us from the ether of abstract operations whose relationship to particular pieces of hardware is purely contingent.
Cowen and Balwit reject transhumanism, which aims to make the natural or God-given condition of being human obsolete by connecting us to superintelligent AI through neural implants and other devices. They envision a middle ground where we retain our humanity under the governance of AI. That’s beginning to look like the best possible outcome of the arc of technological development over the next five to ten years. Yet it recalls the characteristically modern and impatient utopian dream of Karl Marx — a dream that relied on vague prophecies of what life would be like after the revolution, but swiftly evaporated on contact with reality. Because Communism would change the conditions of existence so radically that well-founded predictions were impossible, Marx wrote little about life in the post-revolutionary utopia. He was confident only that the violence required to bring this utopia into being would be worth it. Similarly, Cowen and Balwit admit that the advent of artificial superintelligence will do violence to human agency, meaning, and purpose. But they, too, are excessively confident that all will be well in the end.
The ultimate aim of a liberal education is fully to actualise the human capacity “to form an instinctive just estimate of things as they pass before us”, in the words of John Henry Newman. If we are to maintain our humanity in the age of AI, an education that teaches young people to read, write, and think through the investigation of traditional sources of human meaning — goodness, truth, justice, beauty — and the cultural and political conditions in which they acquire a prominent place in human life, will be more necessary than ever. A recent conversation between Ross Douthat and AI researcher Daniel Kokotajlo, a principal author of the highly pessimistic AI 2027 forecast, makes this clear. Asked about “the purpose of humanity” in a world run by superintelligent AI, Kokotajlo channelled the core of the biblical and philosophical traditions in which Western civilisation is rooted:
I guess what still matters is that my kids are good people, and that they have wisdom and virtue and things like that. So I will do my best to try to teach them those things because those things are good in themselves, rather than good for getting jobs.
Education is possible because the world at its core is good: it possesses a natural or created order that can never be fully comprehended, but to which our minds and hearts, souls and bodies, may be well or poorly attuned. AI or no AI, education at its best develops the virtues or excellences of thought and action, taste, feeling, and judgement that fit one for all seasons, occasions, tasks, and responsibilities of life. And that moral, intellectual, and spiritual attunement, not just to physical reality, nor to the largely unforeseeable contingencies of time and history, but to eternal or transcendent truths, is good in itself as well as for its consequences. This is the most fundamental teaching that the great thinkers and doers in all languages and fields of understanding — Kepler, Galileo, Newton; Al-Farabi, Maimonides, Aquinas; Hamilton, Madison, Jefferson, and Lincoln — have internalised through reading even a few great books.
If colleges and universities have any hope of surviving, they must articulate a compelling vision of what higher education is, and what it is for — of its signal importance for individuals and society alike. If they are to equip students to find their way in an increasingly complex world, they must provide not just a technical education, but a genuinely liberal one. That’s the only way they can convince students not to cheat themselves out of the chance to live rich and meaningful lives by unreflectively turning over their distinctively human energies and capacities to AI. Let’s hope they succeed. For if higher education ceases to preserve, extend, and transmit the wisdom and knowledge our ancestors struggled and suffered to achieve, who or what will?