“Our intuition about the future is linear. But the reality of information technology is exponential, and that makes a profound difference.” — Ray Kurzweil
Universities were built for the slow march of history, not the ambush of exponential change. That is why AI is creating so much pressure in higher education. Institutions designed for gradual reform are now confronting a technology that improves faster than governance, pedagogy, and campus culture can comfortably absorb.
The ripple of panic that swept academia in late February — after 22-year-old engineer Advait Paliwal posted about an AI agent called “Einstein” that could enter learning management systems, complete assignments, and take tests — revealed just how unprepared higher education is for what comes next. Even after Einstein turned out to be a prank, the anxiety remained instructive. The fear was real because the premise was plausible.
AI is not just helping institutions do old things faster. It is changing the environment in which teaching, learning, advising, assessment, hiring, and credentialing happen. It is altering how learners encounter authority, build confidence, access expertise, and demonstrate value. As Marshall McLuhan put it, “The message of any medium or technology is the change of scale or pace or pattern that it introduces into human affairs.”
As Jay Akridge and David Hummels argue in Finding Equilibrium, universities now have to “reimagine what they do” because many of the skills colleges have long developed, such as basic research and the ability to write coherent prose, may be devalued as AI takes over more routine intellectual work. At the same time, they argue that capacities like critical thinking and analysis may become even more important.
Neal Stephenson imagined this possibility decades ago. In The Diamond Age, the “Young Lady’s Illustrated Primer” is an intimate, adaptive learning companion that is responsive to curiosity, pace, and development.
That vision no longer feels like science fiction. Complete College America has urged institutions to integrate AI into curriculum and instruction, while advocates for competency-based education see AI enabling personalized pathways, faster feedback, and clearer demonstrations of mastery.
Stephenson’s vision also contains a warning. The Primer is not a machine for dispensing answers. It is a developmental companion. It can accelerate growth, but it cannot eliminate the need for reflection, struggle, judgment, or human guidance. That caution fits with our recent Kiosk AI blog, which argued that education depends on productive difficulty. Higher education cannot let AI turn learning into frictionless throughput. The real advantage lies in pairing intelligent tools with the kind of human mentorship that helps students make sense of what they are learning, why it matters, and how to use it well.
That is the institutional challenge in plain English: higher education cannot bolt AI onto a model designed for sorting and scarcity and pretend it has innovated. As ASU President Michael Crow put it, “We need to upgrade. We need to move out of the 19th century. We need to move out of these simplistic and, in some cases, cruel ways that people are assigned and sorted and moved forward.”
That brings us to The Cluetrain Manifesto, which adds the moral stakes that neither techno-optimism nor techno-pessimism can address on its own.
Cluetrain’s enduring insight was that networked technologies should make institutions more human, not less. Markets, it said, are conversations. The people on the other side of systems are not segments, targets, or abstractions. They are human beings.
That is a powerful challenge for higher education in the age of AI. The same systems that promise to personalize learning can also monitor behavior, score risk, and normalize surveillance. The question is not whether colleges will use AI. The question is whether they will use it to expand learner agency or to perfect institutional control.
AI in higher education is exciting for exactly the same reasons it is unsettling. If McLuhan teaches us that a new medium reshapes human experience, and Stephenson imagines the promise of a deeply personalized learning companion, then Cluetrain supplies the ethical test: does AI help institutions engage learners in a more human voice, or does it reduce them to data points for prediction and control? The same technology that can support curiosity and mastery can also become machinery of surveillance. The difference will come down to whether AI is designed to serve learner agency or institutional extraction.
That is the moral decision now hiding inside what too many institutions still describe as a technology strategy. The real choice in front of higher education is whether to build the Primer or the panopticon.



