I know neoliberal is one of those words, and honestly, it’s not a required distinction, as there isn’t a set of non-neoliberal universities to contrast it with. We all live in the market we live in, the same market as the rest of the US and the world. These changes aren’t new. In fact, excellent (pun intended) material examining academic capitalism was written 40 years ago. But the neoliberal university really did accelerate with the 2008 recession and the ensuing rise of algorithmic capitalism.

That’s old news. But it is news that makes higher education’s adoption of AI feel like familiar news. The final chapter of my last book is all about this. There I look primarily through the lens of the Horizon Report, produced by the New Media Consortium and later by EDUCAUSE, as symptomatic of the role of educational technology over the last 15-20 years.

The first thing to consider about AI is its adoption into university operations: the stuff that isn’t research or teaching. At UB we are deploying Salesforce, but that’s just one of several platforms that collect data on user interactions, and AI becomes a way of distilling that data. After all, AI is fundamentally dimensional reduction: the collapsing of millions or more data points/dimensions into a single line, such as a sentence or a piece of code. Student, faculty, and staff interactions with any aspect of a university’s digital platforms are data waiting for harvest, analysis, prediction, and enaction.
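To make that concrete, here’s a minimal sketch of what dimensional reduction looks like in practice. The data and field counts are invented, and this isn’t anything UB actually runs; it just shows thousands of tracked interactions per user being collapsed into a handful of numbers that can then be scored, ranked, and predicted upon.

```python
# Toy illustration (invented data, not any real university pipeline): collapse a
# high-dimensional matrix of user interactions into a few latent dimensions.
import numpy as np
from sklearn.decomposition import TruncatedSVD

rng = np.random.default_rng(0)

# Hypothetical data: 1,000 users by 5,000 tracked interaction types
# (page views, LMS logins, email opens, clicks on advising reminders, ...).
interactions = rng.poisson(0.05, size=(1000, 5000)).astype(float)

# Collapse 5,000 dimensions into 10 latent factors.
svd = TruncatedSVD(n_components=10, random_state=0)
user_factors = svd.fit_transform(interactions)

print(user_factors.shape)             # (1000, 10): each user is now ten numbers
print(svd.explained_variance_ratio_)  # how much of the original variation survives
```

In practice the methods are fancier (neural embeddings rather than SVD), but the basic move, many dimensions in, a few numbers out, is the same.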

The market imperative for this adoption is clear. Universities must appear “cutting edge.” None can afford to be the university that “doesn’t get AI” or is behind the times. For university administrators the personal risk-benefit calculation is also clear: they don’t get punished for following the market. Furthermore, the money, such as it is, is in AI adoption. As an administrator, if you want to move forward in your career you need to put your name on shiny new things. AI is shiny, Mr. Magpie.

So I understand their motives for printing out all the “now with AI” stickers and slapping them on every surface.

But what is the actual motive for creating AI+ degrees and/or general education AI literacy? Hold that thought. Basically, one of two things is going to happen over the next five years. 1) The AI hype bubble will burst. We will still have these usable tools, but that’s it. In that case, an AI+ degree looks foolish. 2) AI will develop into something genuinely useful across society, in which case it will be integrated into every discipline anyway. In that case, an AI+ degree may have short-term value, but it will look increasingly like a degree in programming your VCR. And what is the second act? An AI+ PhD? Lol.

The actual motives are easily understood from the perspective of the short-term market advantage. But that’s the institution’s advantage. Not the students’ or the faculty’s (at least not collectively; I’m sure some individuals will do well by this).

None of that is to say that we shouldn’t be researching or using AI at universities. Absolutely we should. The question is how? I’m not sure. But the answer can’t be “as quickly as possible,” because that doesn’t make sense beyond the precepts of market advantage. It’s not as if we’re in the position of having to announce that we should “start studying AI.” AI started in universities; we’ve been studying it all along. I’ve been studying AI and machine intelligence my entire career.

So what are we moving quickly to do then? It would appear we are moving quickly to expose ourselves to frontier AI. And even when universities move to build their own AIs, they are participating in the same AI architecture, the same AI technoculture as the rest of the market.

With that in mind, here’s some advice from Annie about how to read university communications in the age of AI.

When a university uses Salesforce, many of the emails, texts, and alerts students or faculty receive are generated or coordinated through an automated system rather than sent personally by an individual. These messages are part of the university’s “engagement” and data-tracking infrastructure, which is designed to ensure that communication is timely, personalized, and measurable. For example, a reminder about registration or advising might be sent automatically when certain conditions in your record are met, or an AI tool might help draft or schedule a message using information drawn from your profile. The system’s goal is to streamline communication and support, but it also means that some messages are standardized or algorithmically personalized. Understanding this can help recipients read these messages critically—recognizing that they may blend human and automated input, and that they reflect institutional priorities for efficiency, consistency, and data tracking as much as individual outreach.
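For what it’s worth, the logic behind such a triggered message is usually not mysterious. Here is a rough sketch; the field names and the threshold are hypothetical, not drawn from Salesforce or from any actual UB configuration.

```python
# Hypothetical sketch of a rule-triggered "engagement" message. The fields and
# the 60-day threshold are invented for illustration only.
from dataclasses import dataclass
from typing import Optional

@dataclass
class StudentRecord:
    name: str
    registered_for_next_term: bool
    days_since_last_advising_visit: int

def registration_reminder(student: StudentRecord) -> Optional[str]:
    """Return an automated reminder if the record meets the trigger conditions."""
    if not student.registered_for_next_term and student.days_since_last_advising_visit > 60:
        return (f"Hi {student.name}, our records show you haven't registered for "
                "next term. Your advisor has openings this week.")
    return None  # conditions not met; no message goes out

print(registration_reminder(StudentRecord("Jordan", False, 75)))
```

The “personalization” here is string formatting over tracked fields, which is worth keeping in mind when a message feels like it knows you.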

At this point, I think it is legitimate to read any general messaging from the university as AI slop, even if a human has touched it (i.e., been in the loop). Beyond that, messages from individual administrators will likely have been edited, revised, “massaged” by AI features, like the Copilot built directly into our Office products. If they aren’t, if those administrators find such uses of AI unethical, then the curricular embrace of AI would be truly Kafkaesque. Even I can’t imagine that level of absurdity. Besides, it doesn’t conform to market forces.

BTW, this is kind of what “the medium is the message” means. It doesn’t matter what the university’s mission, values, or strategic plan might be. The curricular content doesn’t matter. The medium of the AI+University program is dimensional reduction for short-term market advantage.
