OK, this post will be filled with theoryspeak, but I’m trying to boil down for myself the articulation of subjectivity in my book.
First, a couple of ground rules.
1. I talk about cognition, consciousness, and subjectivity as interrelated concepts and/or processes.
Cognition designates the activity of data-processing mechanisms: those that are embodied in the conventional sense (e.g. the brain, the nervous system), those attached to or implanted in the body (e.g. mind-altering chemicals, VR helmets), and those external to but nominally connected with the body (computers and computer networks). Most of the processes of cognition exist beyond or beneath our conscious awareness.
Consciousness, then, is an epiphenomenon produced as a result of cognitive activity (more on how this happens in a minute). Basically, though, a significant part of the cognitive processing that goes into taking an action or making a decision occurs at a level below our consciousness. This in itself is nothing new since Freud created his theory of the unconscious. What is new, though, is recognizing that “intelligent” machines play a part in cognitive processes at this level. Anyway, consciousness is where we become aware of our thoughts and where certain cognitive processes occur in which we try to understand the choices we make and apply certain socio-cultural principles to our decisions (e.g. ethics, logic, etc.).
Subjectivity emerges from those conscious cognitive processes at the intersection they make with the socio-cultural. Subjectivity is the relationship between felt experience (affect) and the cultural identities and values our consciousness attaches to it: the role we imagine for ourselves/have imagined for us in various cultural discourses.
To take an obvious example: you might walk into a department store and find (or not realize, as the case may be) that the music, lighting, color scheme, decor, even the scent of the space have all been designed to put you at ease and encourage you to shop. Even though you may be consciously aware of such strategies, you still feel their effect, just as knowing that alcohol makes you drunk doesn’t stop you from getting drunk when you drink it. And, after all, you didn’t walk into the store because you didn’t want to buy something, right? Then your mind says something like “you’re a tenured professor now; you really should start dressing better” (well, my mind might say that). Obviously here my subject position (or one of them) is starting to assert itself. Then another part of me might say, “Nah, I want to buy something that makes me look ‘cool’” (whatever that is). All of this is going on in my conscious mind, where I am evaluating, to some degree, these and other thoughts (e.g. what I can afford, what my wife will say, etc.). All during this time I am maintaining a continuous feedback loop with my imagined subject positions and their discourses–that’s ideology. But I am also connected to those deeper cognitive processes, where I am getting “gut reactions,” spasms of affect, associated with various thoughts. However, these affects do not originate solely within my body; they emerge through an embodied connection with information-media networks. When I look at a particular product and get a feeling, that isn’t simply ideology or advertising as we normally think of it. This is a different realm of power, one that intersects our consciousness production in a way that goes beyond Althusserian ideology or Foucauldian power and into Deleuzian control and Baudrillardian seduction.
And it is in the latter space that I am articulating cognition, consciousness, and subjectivity–and trying to do so in a way that recuperates something akin to the concept of agency, that is, a means of political activity to oppose the spread of the potentially (and increasingly obviously) fascistic tendencies of virtual capitalism.
So how do I play this game? Largely through building on Brian Massumi, Manuel DeLanda and Deleuze.
The topological in “topological subjectivity” references a mode of geometry in which shapes are viewed with greater fluidity and greater symmetry. For Deleuze this would be the virtual, undecided state, a state very different from the way we typically speak of the virtual now (as in VR), which is a fundamentally preprogrammed and digital state. That is where I begin in the second part of my book, which treats this topic.
As there is no “origin” to thought, we can only start in the middle of the process. Information enters the brain through a range of inputs. Some provide information about the body’s internal systems; others provide sensory data about external conditions. While all of this information reaches the brain in analog form (electrochemical transmissions), some of it contains digitally encoded information (e.g. when your eyes transmit information about words on a page), or at least that analog information triggers digital codes stored in the mind. Indeed much of this information does (e.g. I have an itch; I’m hungry; I stepped in something wet, etc.). This analog information takes the form of various intensities; as they mingle together, they coalesce into particular thoughts, taking the form of language and emerging in our consciousness, where they then intersect with cultural concepts coming out of institutional discourses.
One way Massumi discusses this is by examining the difference between proprioception (an embodied sense of direction and muscle memory) and Cartesian space. He talks about the processes we use for navigating both strange and familiar spaces. Ultimately, the point here is that, in terms of geometry, the differentiated spaces of Cartesian geometry emerge from less differentiated topological spaces, which would include proprioceptive spaces. As we navigate, there has to be a process of translation up and down this system of intensities.
How does this relate to new media? Well, I take a sci-fi example, The Matrix. The virtual world in the movie is obviously constructed from digital information. However, that information must be downloaded into human brains through an interface at the base of the skull. That interface intercepts and simulates all the information the brain is receiving and translates back and forth to the network. However, the translation into the analog, electrochemical processes of the brain is a step into a less differentiated space where sensory information and affects can slip across senses (synaesthesia) and activate idiosyncratic memories (Barthes’ punctum). Not the least among such translation “errors” can be a sense of uncanny paranoia, which Neo articulates in the question, “What is the Matrix?” So we can see here, in the process of cognition itself, that the translation of digital information into the analog systems of the human brain creates a certain level of undecidability, a pharmakon-effect, if you will, that is the seed for a new kind of agency. This sensibility can be seen in the trilogy, which begins with the paranoid fantasy of Neo as a messianic savior (in the first film), moves through a crisis in the notion of free will (in the second), and ends with neither victory nor defeat for the humans or machines but an insinuation of otherness and indeterminacy into the system, as Neo rejects his status as the “One” who fights and destroys the evil Agent Smith and instead becomes the “Other” who merges with Smith’s fascistic oneness to recreate the Matrix.
As I move into thinking about consciousness, I stay with The Matrix, but move from the film’s content to its production, specifically its use of special effects in the “bullet-time photography” shot and the “burly brawl” in the second film. I compare this to the uncanny feeling generated by early cinema as a kind of confrontation with our own mechanical nature represented on screen: a feeling that was explored in film itself through the use of the doppelganger effect and stories about doubles. Here though, rather than mechanical doubles, we are confronted by a digital multiplicity. That is, new media confronts us with our informational nature, and the burly brawl with its multiple Agent Smiths is similar to what we see in the doppelganger of early film. Our conscious minds seek to generate a sense of unity as the “I,” so this fragmentary representation can be disturbing, as is, I think, the notion of a distributed sense of self with one’s identity strewn across various networks.
The difficulty here lies in our longstanding, logocentric association of consciousness with speech, as a kind of command-and-control software, an operating system, for the body. This notion relies upon an a priori wholeness to the conscious mind, which, at least in Cartesian terms, is ultimately secured by God. The digital uncanny intensifies the filmic uncanny by shifting from double to multiplicity, a step that Freud always resisted taking, as Deleuze and Guattari discuss. However, if we view consciousness as always already exterior, as Derrida’s central argument contends (this would actually be something I cover in the first part of my book), then the informational networks that generate our digital multiplicity simply extend (and perhaps make more visible to us) the process of consciousness we have always experienced. And as I have laid out above, this isn’t a matter of digital mind control (at least not necessarily), as these processes verge into a state of topological indeterminacy.
Hmmm…is this still making sense? I think what I’m trying to say is that when we begin to realize that computers are integrating into our cognitive processes at a level below and/or beyond our consciousness (i.e. doing our thinking for us to some degree), we get this uncanny, paranoid feeling that perhaps our minds are not our own, that we are not whole and cohesive. This is the paranoia that The Matrix capitalizes on, not only in its plot but in its production as well. However, what I am trying to suggest here is that this externalization of our cognitive processes is nothing new; in fact (as I argue in the first part of my book), our consciousness emerges precisely because we externalize some of our thought processes in the form of symbolic behaviors (some 30,000 years ago or more).
OK, maybe that’s a little better.
Finally, I get to subjectivity and the concept of the cyborg. This is ultimately where any strategy for political activity will need to be articulated. Why is that? Well, because although individuals can undertake political acts, those acts are only political inasmuch as they occur within a communal context that can view them as such. In order to be a member of a community, an individual needs some kind of relational interface to exchange symbolic values (i.e. to communicate): that interface is subjectivity. I connect the cyborg to the concepts of haunting and becoming. The cyborg is the haunting of the machine in the human and the human in the machine, an anomalous connection between these two packs through which affects can pass from one to the other. Haraway is attracted to the figure of the cyborg as a political strategy because the cyborg figures out how to take different parts and make them work together without forcing them to become part of some totalized whole. As I have suggested above, we are recognizing the differentiated and distributed condition of our cognitive processes and consciousness: the cyborg becomes an apt subjectivity for such a condition.
So what does this have to do with “new media rhetoric”? How does this work in the “real world”? And how does it fit in with the concerns of rhetoricians and other academics facing the challenges of new media and the new university (a la Bill Readings and others)?
How do you like the sound of “read the book and find out”? Most of this is the subject of the third and final section of the book. However, I am planning on ending this section with a more “real world,” classroom-type example: students creating web-based assignments. In the past I have had my students produce “hypermedia research projects”: nonlinear sites that incorporate web and print research, graphics, images, and sometimes other media. Typically, when they do this they will copy images and graphics from other websites, imitate other web page designs (sometimes to the extent of copying code or cascading style sheets), and help each other with technical problems designing their pages. All of these activities are roughly equivalent to what we might term academic dishonesty if we were dealing strictly with a conventional written essay. And often I will encounter a problem with one of these assignments where a student has copied not only an image from a website but text from the website as well. Why is one acceptable and the other not? More complexly, students can use Photoshop and related applications to automate any number of modifications they might want to make to media objects; put differently, they let the computers do the thinking for them. Would I accept my students having a computer do their writing for them? Finally, when I go to evaluate their projects, any number of variables make my encounter with each project unique–variables that are not entirely in the control of the designer. For what can I hold my students accountable when I grade them? What counts as “their” work, “their” thinking?
Here is where we have to begin: by realizing that they are distributed networks of cognition. I know they’ve all done Google searches to find material for their projects. How much thinking has Google done for them? How much information processing has Dreamweaver or Photoshop or OS X or any other application done for them? How much have they relied on various online communities to give them help with their design? The point is that the answers to these questions are irrelevant. We need instead to recognize ourselves as channels for such informational processes. Of course I am interested in the kinds of conscious choices my students made in constructing their projects. How they articulate those choices, and their ability to choose among them, then become factors of their subjectivity. Politically speaking, in my pedagogy, I want to open my students up to the potential of distributed cognition by stripping away their connection to outmoded concepts of thought, originality, intellectual property, and individual learning. Sometimes they giggle a little because they think I’m letting them cheat, and I suppose in someone’s definition I am. But in my definition, I am introducing them to a new way to interact in the informational universe.