
language, programming, and procedure

Following on my last post, by coincidence I picked up a copy of Max Barry's Lexicon, which sits in the sci-fi supernatural genre: light reading but well-reviewed. Its basic premise is that language triggers neurochemical responses in the brain and that there are underlying operating languages that can compel and program humans. The result is something that is part spellcraft, part cognitive science and sociolinguistics, and part rhetoric, with the identification of different audiences who respond to different forms of persuasion. In this respect it reminds me somewhat of Stephenson's Snow Crash, or even, in a far more literary vein, Reed's Mumbo Jumbo.

Conceptually, what's most interesting about Lexicon for me is the role of big data and surveillance. Compelling people requires identifying their psychographic segmentation, a practice in marketing research; think of it as demographics on steroids. This is the information produced from tracking your "likes" on Facebook, from text mining your Gmail and Google searches, and from the data collected by your shopper card. Perhaps you remember the story from a few years ago about Target identifying a shopper as pregnant. Maybe this happened, maybe not. But that's the kind of thing we are talking about.

Where does this get us?

  1. The better you know your audience, the more likely you are to be able to persuade them. I don't think anyone would disagree with this.
  2. Through big data collection and analysis, one can gain a better understanding of audiences not just in broad demographic terms but in surprisingly narrow segments. How narrow, I’m not exactly sure.
  3. The result is the Deleuzian control-society version of propaganda, where we are not disciplined to occupy macrosocial categories but are modulated across a spectrum of desires.

Certainly there are legitimate, real-world concerns underlying Lexicon, as one would hope to find in any decent sci-fi novel. It's also a paranoid, dystopian fantasy that gets even more fantastical when one gets down to the plot itself (but no spoilers here). I suppose my reaction, in part, is to say that I don't think we are smart, competent, or organized enough to make this dystopia real. But for me the more interesting question is whether we really are this way. To what extent are we programmable by language or other means? This is where one might return to thinking about procedural rhetoric.

I suppose the short answer is that we are very programmable and that our plasticity is one of our primary evolutionary advantages, starting with the ability to learn a language as an infant. One might say that our openness to programming is what allows us to form social bonds, have thoughts and desires, cooperate with others for mutual benefit, and so on. If we think about it in Deleuzian terms, the paranoid fear of programming (tinfoil hat, etc.) is a suicidal-fascist desire for absolute purity, but ultimately there’s no there there, just nothingness. If we view thought, action, desire, identity and so on as the products of relation, of assemblage, then “we” do not exist without the interconnection of programming.

Of course it's one thing to say that we emerge from relations with others. It's another to investigate deliberate strategies by some corporation or government to sway or control one's thinking. It's Latour's sleeping policeman (or speed bump, as we call it) or the layout of the supermarket. Imagine the virtual supermarket that is customized for your tastes. You don't need to imagine it, of course, because that's what Amazon is. Not all of these things are evil. Generally speaking, I think we regard speed bumps as a good way to stop people from speeding in front of an elementary school, more effective than a speed limit sign alone. There is an argument for the benefit of recommendation engines. We require the help of technologies to organize our relations with the world; this has been true at least since the invention of writing. Maybe we'd prefer more privacy around that; actually, there's no maybe about it. It's one thing to have some technological assistance to find things that interest us; it's another to have some third party use that information for their own purposes.

I also wonder to what extent we are permanently and unavoidably susceptible to such forms of persuasion. Clearly the aim of most advertising and other persuasive genres is not to convince you on a conscious level but to shape your worldview of possibilities: not to send you racing to McDonald's right away but to have McDonald's figure prominently in your mind the next time you ask yourself, "what should I have for lunch?" And even when fast food enters our minds as a possibility, we might consciously recognize that the idea was spurred by a commercial, but do we really care? Do we really care where our ideas come from? Are our stories about our thoughts and actions ever anything more than post-hoc rationalizations?

Returning to my discussion of Bogost, Davis, and DeLanda in the last post, I think there is something useful in exploring symbolic action as a mechanism/procedure. As a book like Lexicon imagines, we have been programming each other for as long as there has been history, perhaps longer. Maybe we are getting "better" at it, more fine-tuned. Maybe it's a dangerous knowledge that we shouldn't have, though we've been using ideas to propel one group of humans to slaughter, enslave, and oppress another for millennia. That's nothing new. If anything, though, for me it points to the importance of a multidisciplinary understanding of how information, media, technologies, thoughts, and actions intertwine as the contemporary rhetorical condition of humans.

 

One reply on “language, programming, and procedure”

IMO this is a poorly explored topic which relates to thought disenfranchisement of the general population.

That is to say, large swaths of the population feel incompetent in basic decision making. These people are highly persuadable because they substitute external assistance in decision making for internal critical thinking and research ability. Part of this deal is learning not to care about getting ideal outcomes.

In short, the faster the world turns, and the more that is expected of people in their daily lives, the more people get left behind. They simply fail to reach the effort and critical thinking thresholds to “do life” on their own terms. And many would gladly trade luxury concepts like privacy and autonomy for a voice on their cell phone telling them what to do.

I guess this is a long way of saying that not all humans are equally susceptible to persuasion. Or at least, the amount of effort you’d have to invest in persuading people varies a great deal.
