[Let’s pretend we just had the whole postmodern, posthuman conversation about communities: inoperative, unavowable, coming, etc. If you can’t have that conversation in your head, ask an AI to help you. It can. TLDH (too long; didn’t hallucinate): community formation is complex, which means it is constructed from problems that could be productive but that, when approached as solvable, tend to produce suffering and death. Voting for solutions is like turkeys voting for Christmas. Cognitive dissonance is life as that turkey.]
(N.b. that’s an English idiom, which is why they aren’t voting for Thanksgiving.)
Right now, UB, like everyone else in academia, is rushing to build AI curricula. Following the marching orders from Albany, our brand of this curriculum is going to be generically called AI and Society. Or at least that’s the name we have been instructed to give to a new department we must create. At UB, it looks like this department will be composed of mathematicians, computer scientists, engineers, and a few social scientists whose research makes use of AI: what I think we would generally call computational social science. The plan is that this department will offer half of a number of “AI plus” majors (“AI and [your noun here],” basically) and become the hub for all AI curriculum on campus. Given their disciplinary expertise, the department’s curriculum is constructed primarily from existing introductory courses in math and CS (calc, stats, intro to CS, data structures, and so on). They also plan to offer some generic introductory social science lectures that are new in name, anyway, like an “AI and Society” class that will take a whistle-stop tour of all the solvable social problems AI is creating.
I understand that disciplines on the other side of campus are fundamentally solutionist. From a humanities perspective (or at least my perspective), they are in continual search of the final solution, which is chilling. I imagine it creates a lot of cognitive dissonance for them too. They know very well how final solutions, or solutions of any kind, tend to play out among humans. I can’t imagine they really believe that an AI will change that, not unless they’ve switched from being scientists to being priests. No, my understanding is that they operate from the same pragmatic position as society generally does: we have to try something to address our collective and individual pain as a society with problems.
So this isn’t finger-pointing at individuals, even though these AI-driven university processes will likely cause a lot of suffering for many individuals. It can’t help itself. That’s what solutions do. They solve us by dissolving us. The only solution to life is death, and that’s an unsatisfactory solution from my perspective. I’d rather figure out how to live productively with that problem than try to solve it. All community solutions are just fruit from that poison tree. Unfortunately, AI and Society, as we are approaching it, is an intensification of the suffering of solutionism. And when we imagine it as the foundation of the future of higher education, well… yuck. I think I just threw up in my mouth.
So that’s the pathos. Let’s talk logos.
In American higher education, since the Morrill Act of 1862, the national purpose of academia has been the support and expansion of the state economy. During this same period, Matthew Arnold was writing the essays that would be collected in Culture and Anarchy, where he argues:
The whole scope of the essay is to recommend culture as the great help out of our present difficulties; culture being a pursuit of our total perfection by means of getting to know, on all the matters which most concern us, the best which has been thought and said in the world, and, through this knowledge, turning a stream of fresh and free thought upon our stock notions and habits, which we now follow staunchly but mechanically, vainly imagining that there is a virtue in following them staunchly which makes up for the mischief of following them mechanically.
One tension between those aims lies in the value of solutionism. For Arnold, and I think still for the humanities in general, the function of learning and of Culture (to the extent that we still might capitalize it) is to free us from the habits of mind in which we find ourselves miserably constrained. Conversely, the approach of the rest of academia is to find a solution to the misery of being habituated: to use various forms of operant conditioning to try to associate positive human feelings with the serialized labor the nation demands of us.
For example, yes, it is awful and dehumanizing to flip widgets all day for a living, but we have science and social science to provide you with antidepressants, therapy, and curated communal activities (like church, TV, and getting drunk) to mitigate and redefine that experience. In short, the purpose of the solutionist disciplines is to discover ways to keep humans alive and productive.
The arts and humanities are about trying to figure out how and why to live “well,” whatever that means. Turning that question into something productive of a life one can value, rather than finding a solution to one’s problems that one can painfully apply to oneself and others, is what makes our sector different from the rest of the university. So there’s a conflict there.
In short, to enforce a solutionist curriculum of AI and Society on the arts and humanities is to reject them entirely. They are logically incompatible.
So that’s logos. How about ethos? Lol, don’t shutter that lamp yet, Diogenes. So far I’ve mostly been writing about the conceptual elements and their disciplinary implications. Basically, the research problem is that excluding the arts and humanities from this conversation is a good way to get us all killed.
But then there’s the credit-generating motive, which is the gold-standard motive of all universities. Like any business, we are in the business of making money. That means giving the customers a product they can understand and value. That’s what these looper degrees (degrees for the humans in the loop) are about. Yes, these students desire well-paying jobs making the important expert decisions in concert with the AI systems that will populate every profession in the near future. We are in the business of making money, and so are the students. Who would blame any of us?
While they don’t want to be entrained, students recognize that becoming an expert looper requires extensive entrainment to concretize new habituated cognitive behaviors in them. The trained habits make them valuable, much like a cadaver-sniffing dog. The important distinction here is that these humans don’t actually want to face situations filled with unknowns where they have to make decisions. They want to be part of decision-making systems. They want to be like the slave boy in the Meno. They don’t want to have to invent a right decision; they want to be able to recall one from their training. Fortunately, the AIs can help them with that, because they will always know the looper’s job better than the looper does. The AIs just aren’t allowed to do the job because they don’t have thumbs, I guess.
As unsavory as that sounds, it is more desirable for them than dealing with the openness of thought without solutions in the humanities. In the technocratic world, solutions may be painful and miserable, but at least they exist. For most people, it would be unlivable to exist in a world without the possibility of cures and solutions, even if the current batch sucks.
So, like all institutions, we hook into that human psychic frailty and make hay. Looper degrees: because we can’t survive without solutions. As an academic pluralist at heart, I’m fine with other people doing things on campus that I don’t agree with, even when they seem creepy. That’s what academic freedom requires, at least for me. No kink shaming here. I just don’t want to be injured or killed by whatever passes as a good time for them.
Anyway, that’s well over 1000 words, and I’ve got other miserable things to do, as I’m sure you do as well.
BTW, you’ve got to love how the AI just desperately grasped for the one concrete noun in this post and ran with it hard for the image.