At the end of last semester, I participated in a university workshop on large language models, generative AI (ChatGPT and other text generators in particular), and pedagogy. Much of the concern was with the capacity of these tools to generate code and the implications for introductory programming courses. But not only that. The potential for using this technology for all kinds of undergraduate writing is obvious.
As we might expect, there are already detection services, though these have the potential to create an arms race. After all, beating fake detectors is part of the process of building better generators. And it is all too likely that bias will continue to plague these mechanisms. It would be unrealistic to imagine that these detectors will ever work as effectively as plagiarism detection does.
Of course we weren’t solely focused on the academic dishonesty question. We also discussed how to teach students the effective use of large language models in various professions: business, computer science, engineering, etc. In Media Study we too are interested in teaching students to interact with these new media-generation tools. However, we kept coming back to the concern that these tools might encourage some students not to engage with the cognitive work of introductory courses, leaving them underprepared later on.
One way of splitting writing pedagogy in two is “writing to learn” and “learning to write.” A lot of college writing uses the activity of writing as a method for developing ideas and understanding; i.e., it’s not just “writing up” the results of an experiment. That’s writing to learn: writing as a means of learning. Of course students also “learn to write” in various genres and for different purposes and audiences. And yes, the two are intertwined. If all we were doing was the latter, then these tools would be fine, but we’re never just doing that. Even if we imagine a lab report as a writing up of results, that report is still the primary evidence of a student’s learning experience.

Writing assignments in large lecture courses will clearly be an issue. Two hundred students in a class posting short responses in a course management system, evaluated by TAs and graders? It won’t be easy. We can go back to in-class exams or try to develop clever questions. There are plenty of folks out there with advice.
The invention of literacy coincides with the development of schools: literacy needs to be taught. Mass literacy and public schooling coincide.
Most humans share an understandable confusion that conflates experience (phenomenology) with language and language with writing. The result is that we confuse our experience of reading a text with the experience of writing it; i.e., we imagine writing as a limited form of telepathy. We hold people accountable for our interpretation of the things they say or write. We also give them credit for it and recognize their writing as their property (to some degree).
Personally, I’m glad that my moral character is judged more on my ability to write sentences than to hang picture frames, but I’m not sure why that should be so.
And speaking of juries, I’d say the jury is still out on whether or not symbolic behavior is an evolutionary cul-de-sac. Maybe instead of imagining some all-powerful being saddling us with the responsibility of naming the beasts (as if that’s something the beasts wanted), we could look at the whole of human history as a poorly planned tea party: the sad experience of standing around with a bunch of self-important numpties while regretting our life choices (which is my version of the Burkean parlor).
Some tips (or how I learned to stop worrying and love the bomb)
These tips are wholly untested. Don’t spend them all in one place.
- Stop creating prompts/assignments. One of the bigger challenges (right now) for end users of generative AI is figuring out how to give it good instructions, evaluate the output, and then modify it with additional instructions. Don’t do the hard part for your students. I’d just say, “write something interesting that is relevant to this class.” I wouldn’t even give a length requirement.
- Don’t read or respond to student writing yourself. That’s obviously what AI is for. If a student can’t produce something with AI that is interesting enough that I actually want to read it, then it needs to go back to the AI drawing board.
- Have students use AI feedback. It’s easy enough to get feedback from an AI; I’ve been posting AI feedback as comments on my own posts for the last month or so. As an end user, what do you do with those comments? Of course you can feed them back to the AI, e.g., “Use the attached comments to revise this text” (there’s a sketch of this loop after the list).
- Use a simple two-step grading system. First, establish a baseline: if you produce something I want to read, your baseline is an A; if you don’t, it’s a C. Then conduct an oral exam in the classroom: pick a random line from a student’s work and ask them to explain what it means and what their rhetorical reasoning was for writing it. A bad answer drops the grade a full letter; a good answer can move a C to a B or hold on to the A.
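
To make the feedback loop in the third tip concrete, here’s a minimal sketch in Python, assuming the OpenAI SDK. The model name, file name, and prompt wording are placeholders of my own, not anything the workshop prescribed.

```python
# A sketch of the revise-with-feedback loop: ask a model for comments on a
# draft, then feed those comments back and ask for a revision.
# Assumes the OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY
# set in the environment; model and prompts are illustrative choices.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    """Send one prompt and return the model's reply text."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any chat model would do
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

draft = open("draft.txt").read()  # hypothetical student draft

# Step 1: get feedback, the way I've been posting AI comments on my posts.
comments = ask(f"Give brief, constructive feedback on this draft:\n\n{draft}")

# Step 2: feed the comments back, as in the tip above.
revision = ask(
    "Use the attached comments to revise this text.\n\n"
    f"Comments:\n{comments}\n\nText:\n{draft}"
)
print(revision)
```

Whether students run this as code or just paste the same prompts into a chat window, the loop is the same: generate, evaluate, instruct again.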
(BTW, I’ve taken to copy/pasting my text into ChatGPT and asking for an image. In a previous version of this post I asked for something in a comic-book style, and I feel like it wants to keep doing that. So I asked it not to, and it gave me this.)
[AI-generated image]