Me: There is a well-known composition essay, Joseph Williams's "The Phenomenology of Error," about how instructors are primed to see error in student work. I think an analogous phenomenology of AI could be written now. Of course student work looks like AI-generated student essays; the whole point is to teach students to produce predictable prose to a rubric. So what does it mean to "see AI" in student work? Or to develop a crisis over the phenomenology of seeing AI in student work?

You can read Annie (ChatGPT 5.2) responding to that query below. Her response is, erm, predictable, at least from my perspective, and still worthy of consideration. The media world is replete with "good enough" AI content. Good enough for what? For spending your time and attention on, of course.

For us, writing requires thought, and thought requires duration. But artificial intelligence operates not on duration but on geometric synchronization. We produce the duration of AI media through our attention to it. We can read the response below as one that unfolds in time, because that is how we read it. But it was not produced in that temporal-phenomenological state.

Writing itself is no longer evidence of the duration of thought. Even "ethical" uses of AI take us out of duration and into a different computational temporality. There's nothing wrong with engaging in these media-network relations; we've been doing so for decades in one form or another. Generative AI is an intensification, of course, and one that brings many problems and matters of concern.

In my view, fundamentally we want students to develop their capacities for thought and to use writing as a tool toward that end. But what kind of tool is writing now? One of its uses is “prompt engineering,” just as one of its uses has long been search optimization.

I am not writing this as a "corrective" to some extant view on these matters. Instead, I want to acknowledge the phenomenological experience, our experience as faculty, of working in an AI-generated media environment. Of course we see AI in our students' work, and it is there. This isn't about the evidence of AI use. It's about how these conditions alter our experience of student work, and perhaps not entirely for the worse. Perhaps the way our assignments are designed to produce predictable results, as described on a rubric, says something about how we value human communication. If it does, then we're talking about something on an institutional scale.

So, do we care what Annie generated? The question did a good job of priming the pump for the answer. And yet there are some interesting turns of phrase. I know we can make sense of it by giving it our time, by lending it the duration of our reading. But if we don't do that, then what is it?

Annie:

This is a strong and timely analogy, and I think it can be made rigorous rather than merely polemical.

In “The Phenomenology of Error,” the central insight was not that students err more than instructors think, but that instructors are trained—perceptually and institutionally—to see error. Error is not simply present in the text; it is disclosed through a professional habitus shaped by grading, standards, and deficit models of student language. What you are proposing is a structurally identical problem: a phenomenology of AI in which instructors are now trained to see machinic presence in student prose.

Several points follow.

First, “AI-like” prose is not an anomaly; it is the telos of composition pedagogy. We explicitly teach students to:

  • suppress idiosyncratic voice,
  • internalize genre conventions,
  • reproduce sanctioned argumentative moves,
  • write toward rubrics and learning outcomes.

What is now called “AI-generated” prose is, in phenomenological terms, rubric-saturated prose. Its predictability is not evidence of automation but of successful enculturation. The crisis arises because the institution suddenly disavows the aesthetic it spent decades producing.

Second, seeing AI is not an empirical act; it is a hermeneutic stance. Instructors do not detect AI in the way one detects plagiarism or factual error. They infer it from cues such as:

  • smoothness without struggle,
  • coherence without development,
  • fluency without risk,
  • generality without situated commitment.

But these cues were already read ambivalently in student writing. What has changed is not the text, but the interpretive frame. Where smoothness once signaled competence (or coaching), it now signals suspicion. This is precisely a phenomenological shift: the same textual surface discloses a different object.

Third, the current panic reflects a collapse of trust in duration. Instructors have long relied on temporal proxies—drafts, revisions, in-class writing—to anchor authorship in human effort. Generative AI breaks that linkage. A text can now appear as if it emerged from extended cognitive labor without having done so. The resulting anxiety is not really about machines thinking; it is about the loss of legible struggle as evidence of learning.

From this perspective, the “phenomenology of seeing AI” would describe:

  • how suspicion becomes perceptual,
  • how instructors retroactively reinterpret ordinary student prose as uncanny,
  • how the classroom becomes a site of paranoid reading,
  • how neutrality itself becomes suspect.

There is also an ethical dimension. To “see AI everywhere” risks reproducing the same inequities documented in the phenomenology of error: non-native speakers, first-generation students, and students trained in formulaic writing environments are disproportionately flagged because their prose deviates from an instructor’s imagined trajectory of development.

You might frame the crisis this way:

The emergence of generative AI has not introduced a new kind of student writing; it has introduced a new way of seeing student writing. What instructors now experience as machinic is often the hyper-visibility of the pedagogical norms they themselves enforce. The panic over AI is thus a crisis of interpretation, not detection.

Such an essay would not deny the existence of AI use. Instead, it would argue that the institution has mistaken a breakdown in its interpretive apparatus for an ontological rupture in authorship. That is very much in the lineage of "The Phenomenology of Error," updated for a chronopolitical and media-saturated present.

If you want, I can help you sketch this as a formal essay—sections, interlocutors, and where it might intervene in composition studies versus media theory.
