One aspect of our conversation is the role that generative AI should play in faculty pedagogical labor. In Brightspace, as you may know, the discussion tool will have AI design questions aligned to Bloom’s taxonomy (ugh!), which you can insert for your students to answer. N.b.: AI built into the CMS is also a convenient way for a university to track faculty use of AI.

I think we have mixed feelings about such AI use.

Meanwhile I find myself teaching 80+ students this semester: 50-ish in a lecture and 30-ish asynchronously online. By my calculations I have roughly 75 minutes per semester to devote to each student. E.g., if I spend 400 minutes a week responding to students (5 min/student), that’s nearly 7 hours a week in addition to teaching and class prep. If I envision teaching as 1/3 of my job, then I’m already setting myself up for a 50-hour work week, as a plan. At some point, the other aspects of my job begin to suffer.
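For anyone who wants the back-of-the-envelope version, here is that arithmetic as a quick sketch; the 15-week semester is my assumption, and the other figures are the rounded ones above.

```python
# Back-of-the-envelope teaching time budget.
# Assumptions: 80 students, 5 minutes of response time per student
# per week, and a 15-week semester (the week count is assumed).

students = 80
minutes_per_student_per_week = 5
semester_weeks = 15

weekly_minutes = students * minutes_per_student_per_week               # 400 minutes
weekly_hours = weekly_minutes / 60                                     # ~6.7 hours/week
per_student_semester = minutes_per_student_per_week * semester_weeks   # 75 minutes

print(f"{weekly_hours:.1f} hours/week responding; "
      f"{per_student_semester} minutes per student per semester")
```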

And that only gets each student 5 minutes per week. That’s roughly enough (1) to scan what they write weekly and provide some global responses and (2) to review their major assignments and provide a cursory response. I’m on the wrong side of the lever here: adding more of my labor does very little to alter the individual student experience.

So should I use curated AI to respond to students, if the result is a product I agree with and find appropriate, and if, in my estimation, it offers more value than what I could produce in the time I have?

I would say no, and I think both my colleagues and students would agree, though for different reasons. If students want AI output, they don’t need to come to me. That objection would be telling if all AI output were the same, but it never is. So one question is: does AI output have more value because a faculty member has thought about it before sending it on?

And if thinking about a text isn’t the way a text acquires value, then what is? If a faculty member can’t establish the value of AI-generated text, then how can they establish the value of student-generated text?

So I don’t think those kinds of arguments against AI use really work at this level. My reason for not employing AI in this fashion is that I don’t think it would be worth the institutional hassle that would result. Universities that once prided themselves on their thoughtful engagement with the world have increasingly become institutions that encourage (and design) predictable engagement with the world.

The university’s predictable mission is its chronopolitical dimension, as the higher education industry, along with the rest of the economy, shifts into a new mode of anticipatory capitalism. What is often described as surveillance, platform, or financialized capitalism can be more fundamentally understood as an anticipatory capitalism: a regime in which value extraction depends on the computational production of futures prior to their lived realization.

When this is the space of cultural value, what possible value can accrue to the lived experience of learning? There is not even time for learning to occur; after all, the learning outcomes were predicted and anticipated in advance. Because universities must respond at the speed of markets, they have forsworn deliberation except as a performative, post-hoc justification.
