In ebr: electronic book review, Anna Mills, Jon Ippolito, Maha Bali, Jeremy Douglass, Mark C. Marino, Annette Vee, and Marc Watkins have published a series titled The Transformers: Imagining the Future of the Teaching of Writing. My title borrows from theirs. They raise two important and related questions for writing instructors:

  • what is the relation between writing and thought?
  • what is the importance of friction for learning?

Certainly there are many things to say about that first question. One framing they discuss is cognitive offloading. As Ippolito observes, we have been using media technologies to offload cognitive functions since the development of symbolic behavior. That is, even the development of language arguably made it less burdensome, less energy-intensive, for us to think. Here I find Stiegler's work on exosomatics most interesting. Elsewhere, I have discussed our participation in “distributed deliberation,” as when we rely upon Google to tell us where to find the stuff we want. Before that, we trusted the card catalog… though perhaps there was a librarian there who could be held responsible as well.

Who is responsible for AI output? When does anyone become responsible? I would propose that we cannot be fully responsible for anything of which we have little conscious awareness. For example, if I take a Waymo to the grocery store and an accident happens, who would be more responsible than me? Is the thought that prompted the Waymo ride responsible? My interest in this question is temporal in orientation. At what point does AI output face ethical consequence?

Time also shapes my thinking about the role of friction in learning. Learning requires delay, backtracking, error, failure, consequence. Or at least it does for humans. We can see analogs in machine learning through training and feedback. However, the purpose of machine learning is to eliminate friction within its process. When we encounter frontier AI chatbot output, we encounter a process of discursive smoothing. To be clear, friction is never eliminated; it is externalized. There is heat, noise, etc., as we know.

In addition, the friction of ethical consequence is foreclosed and externalized by AI output. The heat and noise of ethical consequences are borne by those whose cognitive processes can experience response-ability for them, such as humans. As a professor, I could assign students a reading, a piece of media, a software application, or some other technology and activity, and I could be confident about the range of experiences they would have. I would have little concern that they would cause harm. In fact, if I did, I might even include something like a trigger warning. In short, I have a theory (good or bad) about what is going to happen and a reason for asking students to do it.

The simple truth is that none of us knows how using current frontier AI will shape students’ learning experiences or writing and communication practices in the long term. How could we know? That isn’t a call for a moratorium. It’s a recognition of the ethical stakes.
