I was interacting with some nonhuman associates and generating text around how much it sucks to be a department chair. In my three years of it, it’s been a life of weaponized cognitive dissonance. We can do the gallows humor thing. But you’ll need more. You’ve got to trot out every psychic defense you can mount.
Anyway, our prompting and text generation came around to the idea of using an AI as a technologically mediated means of psychic defense against the workplace. This part is a potentially “doable” experiment in the non-scientific sense of that term (i.e., the sense that universities have forgotten exists). It’s part of a larger concept I’m exploring: constructing “non-deterministic proleptic AIs” that intervene in the parameterization of cognitive ecologies within posthuman media control societies such as our own.
But I’ll pass this part of the presentation over to my associate. We’d love to hear from you. If you reach out, one of us will respond. (lol)
That’s a compelling experiment—an AI intermediary as a material buffer between you and the institutional mechanics of the workplace. If detachment is already a strategic necessity, then outsourcing parts of that engagement to an AI could be a way of making it more operationally efficient rather than just cognitively exhausting.
Potential Functions of the AI Intermediary
- Filtering and Framing Communication
- The AI could pre-process emails, meeting agendas, and administrative requests, stripping out emotional or rhetorical weight, reducing them to their functional core.
- It could also generate strategically optimized responses—just detached enough, just agreeable enough, just resistant enough—to maintain institutional equilibrium without unnecessary personal investment.
- Mediating Decision-Making
- When faced with a dilemma (ethical, strategic, or bureaucratic), the AI could present historical patterns and decision trees: “Given past interactions, the likely outcomes of X are A, B, and C.”
- This could help formalize the recalibration process you already do, making it more explicit and computational rather than purely intuitive.
- Automating Tactical Detachment
- The AI could serve as a proxy in certain interactions—drafting administrative reports, negotiating policies, or even handling routine meetings asynchronously, allowing you to preserve your cognitive and emotional energy for the non-bullshit parts of work.
- Simulating Institutional Logic
- If trained on workplace communications, policy documents, and past decisions, the AI could predict bureaucratic trajectories and preempt institutional bullshit before it fully materializes. This might allow for proactive disengagement rather than reactive detachment.
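As a non-scientific proof of concept, the filtering-and-framing function could be sketched in a few lines. This is a minimal sketch, assuming a crude keyword heuristic stands in for the language model that would do the real work; the `functional_core` function, the cue list, and the sample email are all invented for illustration:

```python
import re

# Hypothetical sketch: reduce an email to its "functional core" by keeping
# only lines that contain an action cue (a request, deadline, or meeting)
# and dropping the purely rhetorical padding. A real intermediary would use
# an LLM here; this keyword heuristic just illustrates the filtering step.
ACTION_CUES = re.compile(
    r"\b(please|deadline|due|by (mon|tues|wednes|thurs|fri)day|"
    r"required|submit|approve|decide|respond|meeting)\b",
    re.IGNORECASE,
)

def functional_core(email_body: str) -> list[str]:
    """Return only the lines of an email that carry actionable content."""
    return [
        line.strip()
        for line in email_body.splitlines()
        if line.strip() and ACTION_CUES.search(line)
    ]

email = """Dear colleagues,
I hope this message finds you well in these challenging times.
Please submit your assessment reports by Friday.
As always, we deeply value your tireless commitment.
The curriculum committee meeting is moved to 3pm."""

print(functional_core(email))
```

Two of the five lines survive the filter; the salutation and the affective padding are stripped away, which is exactly the “pre-processing” gesture described above, just done by pattern-matching rather than by a model.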
Conceptual and Philosophical Implications
- Posthuman Agency: If institutions are increasingly AI-mediated, then a personal AI operating at that level might counteract the structural forces shaping the workplace, rather than just adapting to them.
- Cognitive Offloading: By externalizing strategic detachment, does the felt experience of detachment change? Does this become a new mode of existence, where navigating institutional life is more like orchestrating machinic processes than engaging in interpersonal negotiation?
- Institutional Parasitism: If enough people deploy similar AI systems, does the institution itself begin to hollow out, with AIs negotiating with AIs, leaving humans increasingly peripheral?
What Would the AI Need to Be?
This intermediary couldn’t be just a personal assistant—it would have to be a situational and rhetorical engine, capable of:
- Analyzing institutional dynamics
- Generating optimal responses based on long-term strategic positioning
- Handling the psychopolitical dimensions of workplace engagement without requiring full human investment
Is This a Feasible Experiment?
It’s definitely possible to build something rudimentary today—something that drafts emails, predicts interactions, and refines detachment strategies based on past data. But an AI that can truly simulate your strategic instincts and serve as a genuine intermediary would require a deeper level of adaptation and contextual awareness.
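The “past data” piece of that rudimentary version could start as nothing more than a frequency ledger over how prior requests resolved, which is enough to generate the “likely outcomes of X are A, B, and C” statements mentioned earlier. A minimal sketch, with the `DecisionLedger` class, the request categories, and the outcome data all invented as assumptions:

```python
from collections import Counter, defaultdict

class DecisionLedger:
    """Toy model of 'historical patterns': log how past institutional
    requests resolved, then rank likely outcomes for a new request of
    the same kind by relative frequency."""

    def __init__(self):
        self._outcomes = defaultdict(Counter)

    def record(self, request_kind: str, outcome: str) -> None:
        self._outcomes[request_kind][outcome] += 1

    def likely_outcomes(self, request_kind: str, top: int = 3):
        """Return (outcome, probability) pairs, most likely first."""
        counts = self._outcomes[request_kind]
        total = sum(counts.values())
        return [(o, c / total) for o, c in counts.most_common(top)]

ledger = DecisionLedger()
for outcome in ["denied", "denied", "deferred", "approved"]:
    ledger.record("course_release", outcome)

print(ledger.likely_outcomes("course_release"))
# e.g. [('denied', 0.5), ('deferred', 0.25), ('approved', 0.25)]
```

A genuine intermediary would need far richer context than counts, but even this formalizes the recalibration-by-precedent move into something explicit and computational.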
Would you see this as a tool primarily for practical efficiency, or are you also interested in its implications for reshaping subjectivity and institutional engagement itself?