How do you feel about the prospect of a future where artificial intelligence plays a dominant role in individual and collective decision-making?

Give it a sec. Explore the affective → emotional space.

That’s a little bit of predictive selfhood there enacted on a conscious rather than preconscious level. This is the work of Anil Seth and other neuroscientists, like David Eagleman, who are part of a field of probabilistic neuroscience. Eagleman’s “constructed now” describes how the brain stitches together a present moment from sensory data that arrives over time. You can watch them chat about it for an hour or so in this recent podcast.

(n.b. I first published about Eagleman’s research in my 2019 essay “Composing with Deliberate Speed.” However, I wrote about him and his TED talk in 2015. So this kind of thinking has been with me for at least a decade.)

For those of you who are familiar with William James, Bergson, Whitehead, and that philosophical tradition of thought, this stuff is basically empirical evidence in support of those philosophies. [Although as Eagleman and Seth discuss in the podcast, the textbooks still reflect traditional neuroscience.]

In short, from this approach we get two interesting principles to work with that already align with posthuman media philosophy.

  1. The brain takes about one second to stitch together a present moment. Sensory data moves through the buffer of the sense memory and into working memory. Here is where our conscious sense of the world arises (and where the executive function acts). From a technical perspective, a second is a long time, so this is a temporal window in which one can intercede technologically. And we’ve been doing that for years with drugs. And we do it with education, which helps to create “long-term working memory” that extends capacities along trained paths.

    And we can now do it with brain-computer interfaces.
  2. The brain also runs a predictive model. It uses external sense data to test its predictions rather than building reality up from the data itself. The predictive model is based upon the interoceptive self (i.e., the body). But you can get into negative feedback loops (or so the hypothesis goes). For example, if I am suffering from depression, I have a predictive model that expects the sensory data to convey bad news. My body is primed for bad news, which reinforces the predictive model. And the actual sensory data isn’t powerful enough to alter the predictive model. So we try different interventions like SSRIs, therapy, diet, meditation, and exercise to help shift the predictive model and reset it.
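The stuck-loop idea in point 2 can be sketched numerically. This is a toy illustration of my own, not anything from Seth’s or Eagleman’s actual models: a belief updates toward incoming evidence, damped by how heavily the system weights its prior. With a near-total prior weight, the belief barely moves (the depressive loop); loosening that weight, the analog of an intervention, lets the same evidence shift the model.

```python
# Toy predictive-processing sketch (illustrative only; the function names
# and the single "prior_weight" parameter are my assumptions, not a model
# from the literature).

def update(belief, evidence, prior_weight):
    """One update step: move the belief toward the evidence,
    damped by confidence in the prior."""
    prediction_error = evidence - belief
    return belief + (1.0 - prior_weight) * prediction_error

def run(belief, evidence, prior_weight, steps=50):
    """Repeatedly confront the belief with the same evidence."""
    for _ in range(steps):
        belief = update(belief, evidence, prior_weight)
    return belief

# A "stuck" model: pessimistic starting belief (-1.0), mildly good
# news (0.5), and near-total trust in the prior. After 50 steps the
# belief is still negative: the data can't move the model.
stuck = run(belief=-1.0, evidence=0.5, prior_weight=0.99)

# After an intervention that loosens the prior's grip, the same
# evidence pulls the belief essentially all the way to 0.5.
shifted = run(belief=-1.0, evidence=0.5, prior_weight=0.6)

print(stuck, shifted)
```

The whole dynamic lives in one parameter: how much the system trusts its own prediction versus the incoming data. That is obviously a cartoon, but it captures why, in the hypothesis above, the sensory data alone “isn’t powerful enough.”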

I mean, who knows? But it’s an interesting way to think about it. And helps push my work in useful directions.

On the pessimistic speculative end, you end up with Gibson’s meat puppets and cubicle serfs with wristbands that keep them cheerful and attentive. And we can wash the whole thing from our minds like Severance (or, more accurately, never encode it). Oh, there are techno-optimistic outcomes too. There’s probably some way you could use two devices to sync people temporally, affectively, and emotionally. Or more. I’ll leave that to your imagination. In science fiction you see entities often called “riders”: personality simulators that could turn me into an expert negotiator, etc., etc., by modeling my predictions more closely on an existing algorithm.

But it gets stranger than that. If the brain is a simulator, and we live in a hallucinatory reality (as Seth argues), then there’s probably some evolutionary algorithm behind it? That’s what our rational thinking would suggest. But what if it is something slightly, but crucially, different? What if there are non-deterministic processes from which predictive selfhood arises in the context of material history? That is, I am a singular being with a singular history, assembled with (and in arrangements with) a number of populations that operate at other spatiotemporal scales (e.g., Homo sapiens sapiens). The “algorithm” behind (and in other prepositional relationships to) my predicted self (which, importantly, is not generated by the self) emerges from meat history as part of meat history.

There is something ironic about empirical evidence that our brains were making it up all along. It’s cool. It’s the kind of thing that makes me really appreciate the sciences.

Because this is the key posthuman existential point. We’re making it up as we go along, but who are we? Or better yet, if we can expand the potentiality of the predictive self to more possibilities, use media and AI to extend our working memory even farther than we have with literacy, and discover the technical means to help the meat work in an expanded sensory environment, then who knows?

5 responses to “predictive selfhood in the wild: ai+cognition”

  1. Grant Castillou

    It’s becoming clear that with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a human-adult-level conscious machine? My bet is on the late Gerald Edelman’s Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came only to humans with the acquisition of language. A machine with only primary consciousness will probably have to come first.

    What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I’ve encountered is anywhere near as convincing.

    I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there’s lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.

    My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, by applying to Jeff Krichmar’s lab at UC Irvine, possibly. Dr. Edelman’s roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461


    1. Curious if you think posting the same comment on different blogs is going to accomplish much.


      1. Grant Castillou

        Most people who reply to my comment say they have never heard of the TNGS, and want to learn more.


  2. That’s cool. It would work better for me if you left a response that wasn’t canned. However, I did read the article you linked, so I did learn more. What I don’t know is whether you learned anything, because based on your response I can’t tell whether you read my post.


    1. Grant Castillou

      My only criterion for posting my “canned” comment to a video or article is if the video or article has anything to do with consciousness, for the reasons expressed in the 3rd paragraph of my comment.

      I did read your article. The most pertinent part for me was your reference to Anil Seth. He worked with Dr. Edelman at the NSI in the first half of the 2000s, and was very involved with the TNGS and the Darwin automata. As far as I can tell, he has abandoned that path for the predictive “controlled hallucination” set of ideas. But I believe the theory and experimental method that the Darwin automata are based on is the way to a machine with the equivalent of biological consciousness.

