I am looking to elaborate a pedagogical scaffold for discussing the role of AI in society in the context of media and information systems, something that students might find useful. Here’s what I have so far.

This ends up having four elements. I started off thinking about cognitive dissonance. It’s a concept students will likely have encountered, at least in pop-culture forms, and they may recognize it in themselves. Alienation, in a psychological sense, relates to cognitive dissonance; the two can be mutually reinforcing mechanisms. When we shift to a more structural conception of alienation as a social rather than individual process, cognitive dissonance can become the experience of participating in an alienating society. These experiences are common, I would wager, but how common is it for us to describe our social-cultural experience in general as a kind of cognitively dissonant alienation of getting by, of survival, of bare life (as Agamben would term it)?

I agree. It’s uncomfortable to think about life in that way.

That brings me to a set of transition states along a gradient. At one end are believers, who do not find culture alienating but rather believe fully in the identity they are granted as themselves. From there we can begin to move through intensities of cognitive dissonance until some near-zero-degree experience of alienation emerges. At that point one would be as far as possible from an experience of belonging while still living.

This gradient can only be perceived and enacted from the perspective of Enlightenment atomized individuals. That is, each views society as something that is slathered across individuals in either a good or bad way and has some concept of what a better slather might be. Society is the slippery surface of the gradient.

However, approached from nonmodern, nonwestern, posthuman perspectives, society would not be a slather or even an expertly designed system for human individuals to inhabit. For these pedagogical purposes, I’ll just say that in these modes subjects co-emerge with their others as communities. The critical perspective engages cognitive dissonance as a site of negotiation: Foucault’s hope that if prison guards read Discipline and Punish, they would not be able to perform the job in the same way. Agamben’s conception of bare life suggests an escape from biopolitics; his response emerges from affective alienation. Stiegler describes the pharmacological operation of technological systems as both poison and cure. From there, one reappropriates technics away from the processes of disindividuation (i.e., the algorithms that guide AI as it takes in a large amount of data and produces a reduced output). This provides my scaffold with its fourth option: distributed care.

An AI’s output is the conclusion of its process, but it is not a conclusion in a rhetorical or rational sense. It is an answer, yet as users we can never be sure what the question was. In this respect it is always like Deep Thought, the supercomputer in Douglas Adams’ work, delivering an answer to a question no one can quite state. Yes, we did deliver a prompt or question. Does the output answer that question? Only if there was some shared understanding; otherwise something else is happening. How does this play out along the gradient?

Belief: AIs provide me with a great deal of assistance, so their value is proven. They just need to get stronger at what they already do and then they will be as smart as I am, or smarter.

Cognitive dissonance: This AI output may not be great, but it gets the job done. This could all go quite badly, but there’s nothing I can do about it. What works best for me is just not to think too much about how and why I am using AI.

Alienation: AI is taking my work. It is devaluing my work by pretending to do it quickly and easily. I don’t see a place for myself in this life.

Distributed care: We can experience AI as helpful while also recognizing its toxicity. To take care, we strike a balance… continually.

There’s a narrative to be had here if you want one. It starts with belief, moves into the discomfort of dissonance, hits a limit, and is pushed out by alienation; it then reforms outside society while still interfacing with it.

I would contend that the current state of higher education sits within belief. Academia is an AI-believer. It has already placed bets on a particular outcome, often with direct investment in AI, and that outcome is one premised on the benefits of a tunable, align-ready AI. The cognitive dissonance of critical theory, in its various academic registers, helps to illuminate the centrality of belief and where heresies and counter-stories start to emerge. These too can be extracted for content and value. Within this scheme, even alienation becomes alienable: a style to monetize, another AI-generated mood.

In this context we are left with the question of whether there is an outside to this empire of AI. The question arises from Hardt and Negri’s concept of empire, so it is a longstanding one. Is there an outside to empire? And if not, where does it put its waste?

Meanwhile, I will provisionally suggest a kind of outside, perhaps along the lines of Stiegler, where we can begin by recognizing that there is a concept of the outside that is inside, and that this inside-outside zone is in itself an interesting speculative space. From that zone, we might attempt “distributed care.” We are both inside and outside AI. We encounter the capacities that arise for us, and we are exposed to AI’s toxicity. This is part of our existential condition.

At this space-time-event, one might attempt a new belief and head in that direction. It’s been done before. Or one might seek some alternative to that gradient. From there? Who knows.
