In the early nineties, Jay Bolter observed the arrival of a late age of print that presaged not an end to print per se, but an end to viewing print as necessary. That is, our ability to imagine a world without print changed print. Of course, even 10-15 years ago, I would think the majority of English Studies would have rejected that observation, and may still. After all, much of English Studies and the rest of the humanities hinges upon the attachment of some special cultural value to print.

To draw a rough analogy here, in the way that English Studies has proven incapable of metabolizing the rise of digital media as a form of cultural value and circulation alongside print, rhetoric and composition now faces the rise of AI as a form of cultural production alongside human authors. In short, AI exposes a foundational contradiction in how rhetoric and composition has grounded its institutional authority in student authorship.

I suppose my general experience of our discipline is that it has sought to impose itself upon digital media. The stance is always that we know what rhetoric is and what writing is, and we will define you in our terms. This is broadly how all disciplines work when they encounter new content. However, I have always been more interested in the opposing gesture, in which rhetoric and writing are themselves rethought. This has been a minor tradition in the field, just as similar minor traditions exist in other disciplines.

In my view, this is a moment where the discipline’s focus on maintaining the institutional legitimacy of composition pedagogy is a limiting factor. If concerns with teaching writing overdetermine disciplinary research into the rhetorical and cultural effects of AI, I believe that will be to the field’s detriment. I think the honest answer is that none of us knows how AI will be employed in human communication in five years. Furthermore, the notion that anyone knows how AI should be employed five years from now is preposterous.

To approach this matter we will need to give up on the convenient fiction that writing is the product of thought. Given that we have all already encountered AI-generated text, that should be clear. Additionally, the assertion that writing does not equal thought is not new. It’s Plato. I am writing this blog post. If I write it over 40 minutes, there are plenty of thoughts from that time that are not legible here. And if this text seems to cohere as a line of thinking, well, that’s a media effect. The magic of editing. We don’t even have to get into issues of agency, distributed cognition, posthumanism, and the like to see that writing (the thing you are reading) isn’t thought. Meanwhile, writing (as in the thing I am doing) involves a lot of stuff, including some conscious attention. But everything I do requires conscious attention. That’s what I mean when I say I am doing it. How does that activity end up in this text? It would have to to some extent; otherwise the text wouldn’t exist. But it’s also insufficient to create the text. Without a laptop, I’d just be tapping my fingers on the dining room table.

To rehearse: AI writing (i.e., its output of text) does not require thought. Its output is produced without the duration of thought. Human writing (i.e., the activity of writing) includes thought because all conscious human activities include thought. Writing as output has no thoughts, nor does it make another person’s thoughts legible. It may prompt thoughts in a human reader, but any equivalence between the reader’s and the writer’s thoughts requires the regulation of a third, institutional order. That is, any such equivalence can only exist through social intervention, such as a university that validates a professor’s grading of a student based upon the reading of a text.

Our problem is that this institutional order relies upon the convenient fiction that writing contains thoughts. If we cannot sustain that fiction, then we cannot fully enforce the claim that the problem with AI writing is that it doesn’t demonstrate student thought. And that further destabilizes our original claim to authority.

My approach has been to investigate these processes through research but also through direct experimentation, and I imagine many of you have done the same. What are the experiences of writing in an AI-enabled media ecology? Have we found workable practices yet? Or is the technology still changing too quickly? I find it very hard to be clear on what ethical practices look like. Of course, that doesn’t mean I won’t run afoul of ethics. Not being sure what the ethical practices are in an uncertain world doesn’t make us less accountable. It just makes the job harder.
