As we know, utopia is nowhere, literally. Much of the literary movement of “anti-anti-utopianism” can be understood as a recognition that utopias seek to go nowhere and dystopias are what result from the attempt. It might be useful to reframe this as anti-anti-nowhere-ism, but probably not. This confusing triple negative might be resolved in the following way: let’s go somewhere and deal with that. We are always already going “somewhere,” even if that somewhere is remaining in place. It’s non-optional from a human perspective; otherwise we could simply choose not to go to our deaths. I’ll refrain from the lecture on why we prefer to imagine ourselves going nowhere fast rather than already being somewhere we need to deal with.
Contemporary frontier AI exemplifies this acceleration toward nowhere. When I say this, I want to underline that frontier AI is distinct from (though not completely separate from) artificial intelligence as a computer science and from the long history of AI development (1956, Turing, Babbage, the abacus, whatever). Not all AI is or must be “frontier AI,” just as not every hamburger is a Big Mac. Except that, because frontier AI is astronomically expensive, it is hard to imagine there will ever be a wide variety of AI machines that operate as these commercial products currently do. And while I don’t want to be redundant in my use of the word “commercial,” we do remember that a commercial product/production is one kind of product/production.
The general estimate is that the world has invested more than $1.5 trillion over the last decade in building AI. That number is expected to increase by another trillion dollars in 2026. [So it is no wonder that universities are looking for a piece of the action.] Probably the most convincing argument that AI is our future is that we’ve spent all our money on it. If we had spent $2.5T on peanut butter, then peanut butter would be our future.
But few people are in favor of a peanut butter future. How about a frontier AI future? Who favors that, and why? This is where the pharmakon allusion in the title of this post comes in.
AI is designed to be a remedy for human cognition. We are well aware of our limits, especially when cognition is valued commercially, as it is with frontier AI. And when I say this, I am not simply referencing the narrative of how much money there is to be made by adopting AI. I am pointing to a valuation intrinsic to frontier AI models. We have long held the value, particularly in American culture but broadly in capitalism, that “smart ideas” make money. That value only intensified in the post-industrial, information economy. And as the old PSA would imply: frontier AI learned it from watching you, dad.
The commercial deficit of human cognition is plainly dromological. We aren’t fast enough; i.e., we do not predictably produce commercial value at speed. The intended design of frontier AI is to be faster than the continuity that humans experience as “the real.” Because we can never act in a time in which we are not present, we can never be as fast as AI. We can certainly be skeptical of AI’s claim to be faster than time, but culturally we value it (commercially) as if the claim were true. AI will always get there before us. [Even though that really smells like a logical fallacy to me.] And, as many YouTube comment threads will attest, being there “first” is valuable. Why? Because if you get there “first,” then you get to own it.
It’s weird when you think of it that way. Our global economy is an elaborate game of “shotgun.”
Regardless of what we think, that’s the remedy AI provides for us (though who “us” is in that sentence is vague). When we consider the poisonous consequences of the pharmakon, I would point to two vectors. The first is that to accept the value of the remedy, we must become the subjects of its operation, much as we become subject to the operation of the drugs we take. The second is that we become addicted: we become cognitively crippled without access to the remedy. We have plenty of literal, real-world examples. Opioids are also a “miracle of modern science.” So here is a version of that question.
What are the effects upon the human condition that arise from the state-market extraction and commercialization of thought itself?
We know that it is possible today to argue against AI in many ways and for many reasons, not all of which I (or any of us) would share. The arts and humanities constitute a multi-disciplinary community with the intellectual and scholarly capacity to situate technological developments in historical, material, cultural, ideological, aesthetic, rhetorical, ethical, and philosophical frameworks, among others. Many, if not most, of these frameworks would offer approaches for revising and exploring something like the question I have posed above. I have no interest in adjudicating among them. I just hope my colleagues trust their training and put it to work.