Among Paul Virilio’s well-known concepts is the idea that each new technology creates its own accident. I’ve discussed this earlier, suggesting that generative AI creates the accident of AI-generated versions of all former accidents. We can think of what follows as an alternative to that claim, or as a different way of describing the same accident.
An epistemic collapse occurs when a community has no shared process of verification: no agreed-upon way to determine what counts as knowledge. There is a fair amount of discussion of our post-truth, intensely partisan society as both a product of and a contributing factor to epistemic collapse on a social level in the United States, for example. However, we might also think of epistemic collapse in the context of Kuhn’s paradigms of normal science or Foucault’s regimes of truth. In that sense, while destabilizing, epistemic collapse is a regular part of institutional life, at certain scales.
The operational epistemic collapse of artificial intelligence is related to but quite different from these scenarios in its technical mediation. When a user enters a prompt into a frontier AI chatbot, that prompt makes an epistemic claim. That is, the prompt asserts particular content, which it presents as knowledge, however provisional. The AI’s output is necessarily non-epistemic for the same reason it is non-ethical: the AI does not exist in such a way that it can know, or claim to know, the content of its output. Quite simply, epistemic claims enter the machine but do not exit; instead, the epistemic content collapses into statistical distribution.
We can read AI output, and we can make knowledge claims about it. We can say, “there’s a good idea in here.” We can put our reputations, our ethos, on the line. The AI can’t, even though it performs that sycophancy incessantly. But what has happened there rhetorically? A non-epistemic linguistic trace of AI operation is granted epistemic status by the reader. It’s a misrecognition. Misrecognition always threatens epistemic collapse, but when it is systematically designed into our media systems, as it is with AI chatbots, the problem intensifies. Then it becomes the AI-generated accident of all accidents again.
Why must it be a misrecognition to recognize epistemic claims in AI output? Because it would require re-cognizing something that has never been cognized. There are no thoughts to recognize.