How do you feel about the prospect of a future where artificial intelligence plays a dominant role in individual and collective decision-making?
Give it a sec. Explore the affective → emotional space.
That’s a little bit of predictive selfhood there enacted on a conscious rather than preconscious level. This is the work of Anil Seth and other neuroscientists, like David Eagleman, who are part of a field of probabilistic neuroscience. Eagleman’s “constructed now” describes how the brain stitches together a present moment from sensory data that arrives over time. You can watch them chat about it for an hour or so in this recent podcast.
(n.b. I first published about Eagleman’s research in my 2019 essay “Composing with Deliberate Speed.” However, I wrote about him and his TED talk in 2015. So this kind of thinking has been with me for at least a decade.)
For those of you who are familiar with William James, Bergson, Whitehead, and that philosophical tradition of thought, this stuff is basically empirical evidence in support of those philosophies. [Although as Eagleman and Seth discuss in the podcast, the textbooks still reflect traditional neuroscience.]
In short, from this approach we get two interesting principles to work with that already align with posthuman media philosophy.
- The brain takes about one second to stitch together a present moment. Sensory data moves through the buffer of the sense memory and into the working memory. Here is where our conscious sense of the world arises (and the executive function acts). From a technical perspective, a second is a long time, so this is a temporal window in which one can intercede technologically. We’ve been doing that for years with drugs. We also do it with education, which helps to create “long-term working memory” that extends capacities along trained paths. And we can now do it with brain-computer interfaces.
- The brain also has a predictive model, so it uses external sense data to test its predictions rather than building reality up from the data itself. The predictive model is based upon the interoceptive self (i.e., the body). But you can get into negative feedback loops (or that’s the hypothesis). For example, if I am suffering from depression, I have a predictive model that expects the sensory data to convey bad news. My body is primed for bad news, which reinforces the predictive model, and the actual sensory data isn’t powerful enough to alter it. And so we do different things, like taking SSRIs, going to therapy, changing diet, meditating, and exercising, to help shift the predictive model and reset it.
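That feedback-loop idea can be sketched as a toy calculation. To be clear, this is not Seth’s or Eagleman’s actual model of the brain; it is just a precision-weighted average, a common illustration in the predictive-processing literature, and every number in it is an assumption made up for the example. A prior belief held with high confidence barely moves when mildly positive evidence arrives; relaxing that confidence (the stand-in here for an intervention like medication or therapy) lets the same evidence shift the model.

```python
# Toy sketch of precision-weighted belief updating, in the spirit of
# predictive processing. All values are illustrative assumptions, not
# a model of any real brain process.

def update_belief(prior, prior_precision, evidence, evidence_precision):
    """Combine a prior expectation with new sensory evidence.

    Each value is weighted by its precision (confidence): a very
    confident prior barely moves, no matter what the evidence says.
    Returns the updated belief and its accumulated precision.
    """
    total = prior_precision + evidence_precision
    posterior = (prior * prior_precision + evidence * evidence_precision) / total
    return posterior, total

# Belief about the world on a scale from -1 (bad news) to +1 (good news).
belief, precision = -0.8, 50.0   # a rigid, pessimistic predictive model

for day in range(10):
    # Mildly positive sensory data arrives, but it is weakly weighted.
    belief, precision = update_belief(belief, precision,
                                      evidence=0.5, evidence_precision=1.0)

print(round(belief, 2))  # -0.58: still well below zero; the loop holds

# An "intervention" modeled as relaxing the prior's precision,
# letting the same stream of evidence finally move the model.
precision = 2.0
for day in range(10):
    belief, precision = update_belief(belief, precision,
                                      evidence=0.5, evidence_precision=1.0)

print(round(belief, 2))  # 0.32: now much closer to the evidence
```

The point of the toy isn’t the numbers; it’s that in this framing the problem is not the incoming data but the weight the model gives its own expectations.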
I mean, who knows? But it’s an interesting way to think about it. And helps push my work in useful directions.
On the pessimistic speculative end, you end up with Gibson’s meat-puppets and cubicle serfs with wristbands that keep them cheerful and attentive. And we can wash the whole thing from our minds like Severance (or, more accurately, never encode it). Oh, there are techno-optimistic outcomes too. There’s probably some way you could use two devices to sync people temporally, affectively, and emotionally. Or more. I’ll leave that to your imagination. In science fiction you see entities often called “riders,” which could be personality simulators that turn me into an expert negotiator, and so on, by modeling my predictions on an existing algorithm.
But it gets stranger than that. If the brain is a simulator, and we live in a hallucinatory reality (as Seth argues), then there’s probably some evolutionary algorithm? That’s what our rational thinking would suggest. But what if it is something slightly, but crucially, different? What if there are non-deterministic processes from which predictive selfhood arises in the context of material history? That is, I am a singular being with a singular history that is assembled with/has arrangements with a number of populations that operate at other spatiotemporal scales (e.g., Homo sapiens sapiens). The “algorithm” behind (and in other prepositional relationships to) my predicted self (which, importantly, is not generated by the self) emerges from meat history as part of meat history.
There is something ironic about empirical evidence that our brains were making it up all along. It’s cool. It’s the kind of thing that makes me really appreciate the sciences.
Because this is the key posthuman existential point. We’re making it up as we go along, but who are we? Or better yet, if we can expand the potentiality of the predictive self to more possibilities, use media and AI to extend our working memory even further than we have with literacy, and discover the technical means to help the meat work in an expanded sensory environment, then who knows?




