There has been some press over the Conference on College Composition and Communication’s resolution on the right of students and faculty to refuse rather than use AI.
It appears to have struck a nerve.
We have seen and heard much about AI use. We have also seen and heard much about AI refuse, or “slop.” In this context AI “use” and AI “refuse” are often indistinguishable. And when we do establish means for sorting use from refuse, how do we do that? These are the challenges of hallucination and alignment. There are other features of AI refuse beyond the slop and other cultural-material externalities. As there is refuse as garbage, there is also the action of refusing (the two words have an odd relationship to the same Middle French word, according to the OED). I am thinking particularly of the epistemic and ethical consequence of refusal as integral to algorithmic descent. What does the AI refuse to consider? What occurs as a result of AI refusing to recognize ethical consequence, of being incapable of being otherwise?
However, it may still be premature to focus on refuse or refusal. Perhaps we should still be inquiring into the natureculture of AI™?
The dominant argument in our culture (and in higher ed) is that AI is primarily computational and mathematical. Thus expertise in these professional-disciplinary discourses comes first. This claim can be refused. As such, the refusal of AI may be the refusal of this argument. What else might AI be?
- Cultural: I’ve made this argument before. These corporations took possession of the massive digital archive of human expression and used it to create AIs. When we use AI in any way, it has cultural effects of many kinds. In short, culture goes in and culture comes out. So what kind of machine is it? A “math” machine? Maybe not.
- Historical: yes, these categories overlap. AI as math-computation occludes history in the presentism of operation. We can go back to that 1956 summer shindig at Dartmouth, or to Turing, or Lovelace & Babbage, or the abacus, or clay tokens in clay envelopes. We had the first Science Wars over STEM’s rejection of the cultural and historical study of science and technology. We can (and should, imo) refuse an AI that denies its histories.
- Mediated: my view includes refusing to teach a version of AI that denies that it is media. This is all over my blog, so in the words of Marvel comics, ’nuff said.
- Rhetorical: see above.
In short, if AI is something that someone is trying to persuade me to use on my computer, how can it not be rhetorical and mediated?
As Latour observes, the Modern aim of science is to purify its knowledge of cultural, historical, rhetorical, and mediated characteristics. Artificial intelligence is presented as the technical achievement of that purity or at least as evidence that we are closing in on the achievement of purified intelligence. It would appear that many would view this as a wonderful achievement: to produce, finally, objective truth, perhaps truths that we could not think without these machines.
In this view, it’s not that culture, history, media, and rhetoric don’t exist. Instead, they are overdetermined by AI’s capacity for producing objective truth and communicating it (magically, I think) into our heads. As a result, they just don’t matter very much. I’ve certainly been told quite directly by STEM colleagues on my campus that cultural, historical, rhetorical studies of AI such as mine simply don’t count as AI research.
That is a view (and an envisioned AI) that I refuse. How could I not?
So yes. AI refusal strikes me as entirely plausible. It’s not about refusing or denying the existence of a discipline within computer science. It’s about refusing, or more precisely, taking a critical-skeptical stance toward a narrative about AI. And we find ourselves increasingly in polarized positions because we are using social media to have conversations.
Maybe if we can realize how enshittified those platforms are, we can use that realization to find a reason to be cautious about this new platform from the same usual suspects. What evidence do we have that these folks can produce something of social benefit? Not that it’s necessarily their job to do that. But UB didn’t create a Social Media and Society department and start requiring everyone to use Facebook. UB didn’t even create an Internet and Society department in the 90s. [Neither did most universities.] But maybe if we had, we’d be better prepared institutionally and intellectually for AI.
Interestingly, if you look back at the earlier CCCC resolutions regarding digital media, what you will find are efforts to support faculty such as myself by asserting the value of studying the internet and of digital, multimodal composing. We can also keep in mind that the WPA-CCCC Outcomes Statement for First-Year Composition has long included objectives such as
- Understand and use a variety of technologies to address a range of audiences
- Match the capacities of different environments (e.g., print and electronic) to varying rhetorical situations
The study of computers as writing tools has been part of the field since the early 1980s. We aren’t new. In fact, we’ve been at this longer than many of the folks who want to claim expertise.