In my current work, one of the core tasks is a radical, media-archaeological study of the medial temporality arising from/as the operation of AI. How might we describe the epistemological and ontological conditions of this temporal medium?
I can’t go into all that here of course. But one upshot should be recognizable. The purpose of AI, as we have designed it, is to bypass the time of thought and produce output that formally anticipates the output of time spent thinking. I.e., its output is designed to anticipate the result of me spending time writing. In the crassest sense, this is how we use gen AI in the workplace (and the classroom).
Now let’s discuss ethics. When we say an action was ethical or unethical, we reference an event: something that happened, exposed us to consequences, and could not be undone. More importantly, we imply an actor who made a decision, ethically or not.
However, as we design AI to bypass thought, there is no ethical event to reference. There is only output. This does not remove ethics from the situation; it displaces it entirely onto those who receive, interpret, and enact the output. The design of “AI ethics” is to ensure that outputs fall within limits. AI does not make ethical decisions. Instead, AI output emerges within bounded topologies. AI does not act; it operates, which means its outputs are reversible, iterable, and not exposed to loss.
Tl;dr: AI’s time-critical operation bypasses ethics to produce anticipatory, nonethical outputs. If we provisionally take that as the case, then we are currently awash in such content in our media ecosystem.
How are our rhetorical situations mutated by this transformation of our media ecologies? Hence the Kittlerian pun: AI media deter our situation; they inject uncertainty into media experience by unmooring our expectation that what we are reading implies anything about its author.
Here’s a mundane example. You receive an email from a student, a colleague, staff, a university office. Whatever. It’s impeccable. It’s clearly been AI-massaged. Maybe just to clean up the text. Maybe along the lines of “look up Professor Reid and write this email to him.” You’d have to have been pretty egotistical to imagine someone doing that a couple years ago. These days not doing it almost seems lazy.
But what do you make of this email? The author (we will assume) has genuine intentions. They have tried to write a persuasive email because they want to persuade you. But those intentions are obscured behind a predictive smokescreen of persuasive prose. Again, this is not ill intent. It’s not even a questionable practice really.
Obviously I’m picking a banal example. If you want to get pointed about it, we can ask what happens when that AI output doesn’t get presented to a human but to the machine that operates a drone. Then that nonethical output is enacted before it ever enters the rhetorical situations of human time. The ethical consequences follow later in the loop.