
from paper clips to brainstems: large language models and nonhuman rhetorics

A familiar cautionary tale about AI is the paper clip maximizer, in which an AI destroys the world in pursuit of its seemingly banal task of making paper clips efficiently. The story imagines an AI with general, even superhuman, intelligence and agency. The debates continue over whether AGI will ever be achieved. But what about the more limited, though rapidly expanding, AIs we have now that operate on large language models?

Tristan Harris and Aza Raskin have an interesting presentation on this, which they term the “AI Dilemma.” Here’s some of where this is coming from. In the last decade, machine learning/AI has had a remarkable impact on how users experience social media platforms. Instead of being asked to maximize paper clip manufacturing, these AIs were tasked with maximizing attention and engagement. As I discuss in Rhetorics of the Digital Nonhumanities, the fundamental aim is simply to get users to buy, to scroll, to refresh, to play, etc.: in short, to click. And they have been tremendously successful. Billions of people have spent billions of hours clicking on these platforms, making these corporations among the most powerful in the world.
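To make that objective concrete, here’s a toy sketch of what engagement maximization looks like as a piece of logic. Everything in it is invented for illustration (the field names, the scores, the weighting); no platform’s actual ranking system is anywhere near this simple, but the shape of the “paper clip” objective is the point:

```python
# A toy illustration of "curation AI": rank candidate posts by predicted
# engagement and serve whichever one is most likely to keep the user clicking.
# All names and numbers here are hypothetical.

from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    predicted_click_prob: float   # stand-in for a learned engagement model
    predicted_watch_time: float   # seconds the model expects the user to stay

def engagement_score(post: Post) -> float:
    # The "paper clip" objective: a single scalar the system maximizes.
    # Real objectives blend many signals; this weighting is made up.
    return post.predicted_click_prob * post.predicted_watch_time

def pick_next(candidates: list[Post]) -> Post:
    # Serve whatever maximizes expected engagement. Nothing about truth,
    # well-being, or civic effects appears anywhere in the objective.
    return max(candidates, key=engagement_score)

candidates = [
    Post("calm-explainer", 0.05, 180.0),
    Post("outrage-bait", 0.30, 240.0),
]
print(pick_next(candidates).post_id)  # -> "outrage-bait"
```

Note what the sketch makes visible: the system isn’t malicious, it just optimizes a number, and whatever best moves that number gets served next.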

How was that accomplished? In some respects, the same way we’ve always gotten others to do things: by appealing to desires and fears and by taking advantage of one another’s limitations. In short, with rhetoric. Or you tell me: what keeps you going back to Facebook and Twitter? Bingeing Netflix? Watching TikTok or YouTube videos streaming one after another? When we said media was a pharmakon, what did you think that meant?

As folks like Harris and Raskin would say, that was in the good old days of curation AI, when the algorithms would do their best to pick the next piece of content that was most likely to keep your attention. It makes me think about one of the earliest viral videos I can recall, “David After Dentist,” where the young David, coming down off nitrous or whatever, wonders “is this real life” and “is this going to be forever.” Of course he recovers quickly enough. However, for those of us watching that video, this did become our real life; it became a kind of forever (or at least so far): sitting in front of a screen watching AI-curated media. As for the cultural effects of AI-curated media? Well, we all know the answers to that, from fake news to conspiracy-driven insurrections, stochastic terrorism and mental illness, climate change denial and anti-vaxxers, etc. And not out of any malice on the part of the AIs (or possibly even their corporate minders), just as the paper clip maximizer means no harm as it completes its bloody task.

Now, we have the start of something new with generative AIs built on large language models (e.g., ChatGPT). Alan Kay imagined computers as “fantasy amplifiers,” and generative AI represents a significant step in that direction. These machines won’t need to curate content when they can create their own. And perhaps they can do tremendous good, as suggested in this recent study about using GPT-3-generated messages to persuade people about the benefits of vaccinations. That’s all well and good, but as we already know from rhetoric, it’s just as possible to argue the other side… of anything. Imagine, then, two (or more) competing generative AIs, each composing media to achieve its own narrow aim. Could one AI persuade the other AI that its position on vaccinations was incorrect? Of course not. Neither one has a position on vaccinations! But one AI might be able to convince (or trick) the other into thinking that, for example, sending pro-vaccination messages was a good strategy for getting people to oppose vaccines.
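A minimal sketch of that adversarial scenario might look like the following. The generate() function here is a placeholder standing in for any LLM call, and the goals and rounds are invented; the point is only the structure of the loop, in which each agent composes messages purely to advance its objective and treats the other’s output as material to exploit:

```python
# A minimal sketch of two generative agents pitted against each other.
# generate() is a stub, not a real model API; prompts and goals are invented.
# Neither agent "believes" anything -- each only optimizes toward its goal.

def generate(system_goal: str, conversation: list[str]) -> str:
    # Stand-in for a real model call. It just names its goal so the loop
    # is runnable without any external service.
    return f"[message composed to advance goal: {system_goal}]"

def debate(goal_a: str, goal_b: str, rounds: int = 3) -> list[str]:
    transcript: list[str] = []
    for _ in range(rounds):
        # Each agent reads the running transcript and produces the next
        # message most likely to serve its own narrow objective.
        transcript.append(generate(goal_a, transcript))
        transcript.append(generate(goal_b, transcript))
    return transcript

for line in debate("increase vaccination uptake", "decrease vaccination uptake"):
    print(line)
```

Nothing in the loop requires either agent to hold a position; “persuading” the other just means steering its subsequent outputs.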

That sounds like some magical rhetorical judo, but it is actually one of the more common forms of persuasion. You know… we shouldn’t be arguing. I think we want the same things… Like taking candy from a baby: you do it by distracting them with another shiny object. Rhetoric has always focused on the levers in the human mind, the weak spots, the pressure points, however you want to think of it. Audience awareness is an analysis of a group’s predispositions, beliefs, prejudices, and so on, even (especially) things they wouldn’t say about themselves. We understand that sex sells and that if it bleeds, it leads. These are clichés.

So we get this metaphor of “hacking the brainstem” because we’re going after the basest operations of the human mind. Of course it’s a metaphor because this isn’t The Matrix. Nothing is getting jammed into the base of your skull, and nothing needs to. Because rhetoric just “hacks” symbolic behavior. It is symbolic behavior. To be clear, symbolic behavior isn’t the “OS” of the mind; that’s a metaphorical bridge too far. We aren’t being programmed. And this isn’t some “us” vs. some “them.” We’re talking about the real material and historical operation of symbolic behavior, of code, of a techno-semiotic stratum. Humans gain new capacities through their interactions with language, symbols, and media.

Likely we’ll gain new capacities through our interactions with generative AI. But that doesn’t mean that it is wise to pursue that direction. We gained new capacities from developing the atom bomb. Was that wise? The jury is still out, but we haven’t irradiated the planet yet. We gained new capacities from the invention of monotheistic religions. Was that wise? Capitalism?

My perspective is that this is a matter of negotiation. Yes, a matter of rhetoric, but also a matter of winding detours, of negotiating the path across a mountain pass: material, environmental, ecological, as well as symbolic and mediated.
