In his September 3rd NY Times op-ed, “The Fever Dream of Imminent Superintelligence Is Finally Breaking,” Gary Marcus takes up the ongoing argument that scaling as a process for producing intelligence has hit a point of diminishing returns. These matters entered broader cultural conversations this summer with Apple’s report on the failure of AI to solve problems a child could answer and Anthropic’s report on AIs going off the rails in workplace testing scenarios. And there’s a lot more research along these lines. That said, there is clearly nothing like consensus, as we continue to see massive investment in scaling.
For the emerging field of critical AI studies, interrogating technical decisions and their impacts is a valuable part of the work. Even within media studies alone, fields like media infrastructure studies and radical media archaeology can offer different methods for doing this part of the work. And then there are many other fields; I’ll just say the cultural studies of science and technology in the broadest sense possible.
The scaling problem, to the extent one asserts there is one, focuses specifically on the costs and benefits of increasing the size of the training data. One solution, as offered by Marcus and others, is the inclusion of a symbolic layer that helps to organize machine learning and keep it in alignment. As one can imagine, to create this symbolic layer we have to know in advance the kinds of answers that will be acceptable. In short, this is potentially a good answer for creating AIs that can replace humans at tasks we already understand well.
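To make that point a little more concrete, here is a minimal sketch in Python of what a “symbolic layer” over a learned model might look like. This is my own toy illustration, not Marcus’s architecture: the function neural_propose, the symbolic_filter, and the rules are hypothetical placeholders. The neural side proposes candidate answers; the symbolic side keeps only those that satisfy rules we already know how to state.

```python
# Hypothetical sketch of a "symbolic layer" sitting on top of a learned model.
# The neural component proposes candidates; the symbolic layer accepts only
# those that satisfy rules written out in advance.

from typing import Callable, List

Rule = Callable[[str, str], bool]  # (question, candidate) -> acceptable?

def neural_propose(question: str) -> List[str]:
    """Stand-in for a trained model: returns ranked candidate answers."""
    # In a real system this would call an LLM or other learned component.
    return ["Paris", "Lyon", "the Moon"]

def symbolic_filter(question: str, candidates: List[str],
                    rules: List[Rule]) -> List[str]:
    """Keep only candidates that every hand-written rule accepts."""
    return [c for c in candidates
            if all(rule(question, c) for rule in rules)]

# The rules encode what we already count as an acceptable answer.
rules: List[Rule] = [
    lambda q, c: c != "the Moon",   # toy domain constraint: must be a city
    lambda q, c: len(c) > 0,        # trivial well-formedness check
]

if __name__ == "__main__":
    question = "What is the capital of France?"
    print(symbolic_filter(question, neural_propose(question), rules))
    # -> ['Paris', 'Lyon']: only rule-satisfying candidates survive
```

The toy makes the constraint visible: the symbolic layer only works because someone has already decided what counts as an acceptable answer.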
This connection is especially interesting when we consider that an ML neural network with a symbolic layer is not unlike the neuroscientific model of predictive processing, with its hyperpriors and Markov blankets, as developed in the work of Andy Clark and others. Can we really make such connections between the mind and the AI? Well, I just made one, so I’ll say yes. What we learn from making such connections is another matter. We certainly make them when we design brain-computer interfaces, but I digress.
What interests me about the insertion of this symbolic layer is how it might be understood in relation to Deleuze’s image of thought, and, in a more elaborate sense, as a collective assemblage of enunciation speaking stratifying incorporeal transformations. Such assemblages have always been articulated as material and historical (naturalcultural in a different register). In this context, AI emerges as time-critical media that operates/acts in a global “real time” that humans can only access conceptually.
So what about Mr. GAx?
Mr. GAx personifies the limits of neoliberal rationality. Mr. GAx is autopoietically committed to the assertion that rationality can do great things/thinks. But Mr. GAx only exists to keep that rationality alive. As such, he’s reminiscent of the Emo Phillips joke Hayles retells in How We Became Posthuman: something like “I used to think my brain was the most important organ in the body, but then I thought, ‘wait, who’s telling me that?’”
What is rationality great at? Well… x. It is good at finding x. When you can’t recall where you left x (it was right here a second ago), rationality is your tool. As a neoliberal being of computation, of time-critical mathematics, Mr. GAx is all about the numbers.
Mr. GAx also brings a special onomatopoeia. To gack, like to hack, is to cough or retch. Something that gacks might well generate feelings of disgust. Gax is also English slang for cocaine, meth, and such snortable treats that might also create a hacking experience. What I just learned is that “gack” is also the German word-sound for the clucking of hens.

Is Mr. GAx a disgusting, chicken-shit drug dealer? Well, sometimes. Ask Stiegler. I’d say it’s an undeniable aspect of AI: the Silicon snake oil, vaporware, and all that. That doesn’t stop us from chasing that high.