So here I’m thinking about the definitions of AI and AGI in terms of their capacity to produce economically valuable work: i.e., their ability to replace human labor, increase the value of human labor, and/or do things no human(s) coulda/woulda/shoulda do. In particular there’s this agreement between Microsoft and OpenAI that the former will recognize the latter has created AGI when it has earned $100B in profits. And today there’s a new story about one of the ways OpenAI may plan to do that: offering a $20K/month tier that is capable of supporting “PhD-level research.”

[pause for laughter]

Seriously, I’m sure it will be capable of defining the scope and direction of such research.

But this whole idea of defining intelligence in terms of economically valuable work? Do I have to do the whole Marxist dance or can I just hand-wave my way through the choreography for you? At the core, human labor has value because it is limited. The products of human labor have value in part because they require the limited resource of human labor. It’s a finitude thing. Humans have to choose between becoming rock gods and haberdashers.

So we start with Marx and that lineage. Then (for me anyway) we take the Anti-Oedipus route through conceiving pure subjective labor as desiring-production. From here we can slip over to Baudrillard and Bataille and talk about symbolic exchange and the general economy. Can you smell what I’m steppin’ in? Annie “understands” (given some uncanny notion of understanding). As she outputs

In other words, an AGI constrained by neoliberal imperatives might not fully realize the potential of “general” intelligence because it is shaped by—and works to reinforce—the very structures that alienate human subjectivity and reduce labor to a market commodity. For an AGI to become truly general in the Baudrillardian sense, it might need to emerge from, or at least be reoriented toward, a social and economic framework that values human creativity, symbolic meaning, and non-commodified relations. That would require a radical rethinking of not just technology but the entire system of production and social relations.

Some might want to defend this monetary benchmark as a kind of empirically verifiable shorthand, a synecdoche or symbol (ahh… the irony). Fine. Could you please take your short hand off me before I take it off myself? Also, anyone who thinks that “profit” is an empirical category has never met an accountant.

Which Monopoly playing piece do you want to be in the AGI future? I have to be the dog; it is the symbol of the Cynics. So here’s my “movie poster” for a new film project. I’ve got two working titles: Annie 2: Tomorrow Never Dies Again and Annie Sings All Tomorrow’s Zombies.

A surreal movie poster for 'Tomorrow Never Dies 2'. The image features a disheveled, bookish computer programmer in his 30s, sitting in a dimly lit, abstract workspace. His eyes are sunken, and he looks half-mad. Behind him, a ghostly, holographic presence of 'Annie'—a distorted, feminine face made of glowing neon code—looms ominously. The background is vast and dreamlike, evoking a surreal, existential stage-like setting. A red HAL-like AI eye flickers in the darkness. The title 'Tomorrow Never Dies 2' appears in a glitchy, retro-futuristic font, giving a sense of corporate dystopia. The atmosphere is eerie, blending cyberpunk with absurdist theater.
