Death by PowerPoint has been a cautionary tale since the term was coined in 2001. We all still experience it, and perhaps we even practice it. (It has its uses.) The rhetorically deadly nature of PowerPoint was named fourteen years after its 1987 release, but it was felt almost immediately upon its use. Death by PowerPoint is not simply a matter of poor design and presentation choices. It has become a post-hoc recognition of the way PowerPoint shifted the practices of workplace and classroom communication. But much like the proverbial frog in boiling water, we normalized these shifts. Today the rhetoric of PowerPoint dominates: nearly every formal meeting is organized around one.

AI workplace tools are poised to become the new PowerPoint or the new MS Office, but also clearly something other than, and more than, that software (which revolutionized workplace cultures thirty to forty years ago). Or at least that is part of frontier AI business plans. In the workplace, AI becomes a site of efficiency, compliance, and epistemic regularity. Tone, structure, and risk awareness are all standardized. Dissent becomes “considerations.” Ambiguity is replaced with institutional confidence and clarity. These are viewed as content-neutral improvements. Just as we once said that PowerPoint made our thoughts clearer rather than shaping them, we now say the same of generative AI (though at this point it may require a little more willing suspension of disbelief).

Let’s consider the particular operation of AI in this regard. How does it work to shape workplace discourses? As faculty we have already discussed this in terms of student writing. AI can generate predictable workplace prose because it has a large corpus to draw upon. This is especially the case for a corporate-specific AI with access to internal communications. Trained on those, an AI can generate outputs that are predictably similar to human ones (from a standard human perspective). With this capacity, a workplace AI can easily smooth human-composed texts so that they become more predictably like corporate discourse. This is what occurs when we experience efficiency, clarity, and confidence in an AI-generated text.
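To make that operation concrete, here is a minimal, purely illustrative sketch of what such a smoothing pass might look like in practice. It assumes the OpenAI Python SDK and a generic hosted model; the “house style” system prompt and the model name are hypothetical stand-ins for a corporate deployment, which would presumably rely on a model tuned on internal communications.

```python
# Illustrative sketch only: a "smoothing" pass that nudges a human draft
# toward predictable corporate prose. Assumes the OpenAI Python SDK
# (openai>=1.0); the model name and house-style prompt are hypothetical.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

HOUSE_STYLE = (
    "Rewrite the user's draft in our standard internal voice: "
    "concise, confident, risk-aware. Reframe objections as 'considerations.'"
)

def smooth(draft: str) -> str:
    """Return a version of the draft rewritten toward the house style."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": HOUSE_STYLE},
            {"role": "user", "content": draft},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(smooth("I think this plan is risky and I disagree with the timeline."))
```

The point is not the particular API but the shape of the operation: a human draft goes in, and a more institutionally predictable version comes out.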

AI literacy is ambivalent about this effect. On the one hand these are desired effects; on the other, we recognize that they introduce new risks that human users must be educated to absorb. This is the other part of “AI ethics.” It’s the part where we say that all of the technical AI ethics design is worthless without human users who will take responsibility for it. An AI is a machine. We expect it to operate within design specifications, but we cannot hold it “responsible.” In the workplace, theoretically, AI produces output faster than humans can account for it without another mediating machine. We find ourselves at further removes (“in the loop,” then “on the loop”) from an increasing amount of output.

AI literacy educates students to accept these ethical burdens. This is AI’s version of death by PowerPoint: a collapse of the time and opportunity for thought.

What will this mean for corporate discourses? Fundamentally, AI makes discourse more predictable, because that’s what its outputs aim to be. It’s hard to know how smooth and undifferentiated corporate discourses will become. No doubt it will depend to some degree on each corporate culture: the more hierarchical the culture, the smoother the discourse, one imagines. The abstract question corporations will need to consider is how much salience they want to grant to unpredicted human prose. When everything sounds the same, it all feels the same, whatever the content.

Universities are often driven by workplace demands, especially on the STEM side. When I look at universities like my own, it seems likely that they will frame AI literacy largely in terms of workplace demands and the broader hegemonic commitment to AI as an intellectually settled matter. This isn’t to say there won’t be concerns about ethics. As I noted above, in addition to teaching some flexible approach to functional literacy (i.e., learn to use these machines in the way we want you to), the other main matter is teaching students to take individual ethical responsibility for AI outputs. Rhetorical-compositional literacy lies ambiguously between media function and ethics, as it always has in some sense, as a way to make ethical choices with tools.

After all, someday one of those students will become a radiologist who spends their day reviewing AI-generated diagnoses for mistakes. That’s a big ethical burden for that individual. They will need ethical training and rhetorical skill to accept responsibility within the AI media ecology.

What are we losing? AI discourse aims to be frictionless. This runs directly counter to the pluralistic and agonistic ethics of academic life, if not of democracy. It is the erasure of difference within a predictable, guardrailed representation of difference.

If we were to do something different, it would require doing the one thing that universities seem incapable of in relation to AI. We would need to wait. We would need to deliberate and engage differences rather than smoothing them out. Do we have a consensus definition of what AI is in relation to “literacy”? Have we even talked about it? Instead, we end up with AI-generated instructional designs and outcomes. And even if they are not technically the outputs of “engineered prompts,” they are shaped by that logic (whatever it may be) rather than our own.
