I presented my paper at the CHATS conference yesterday, and now I’m blogging the text of it. The audience was small, but the paper was well received. In some respects it conveys some of the key points of the book manuscript I’m currently revising, particularly its final section.

In “The University Without Conditions” Derrida recognizes that, “This new technical ‘stage’ of virtualization (computerization, digitalization, virtually immediate worldwide-ization of readability, telework, and so forth) destabilizes, as we have all experienced, the university habitat… Where is to be found the communitary place and the social bond of a ‘campus’ in the cyberspatial age of the computer, of tele-work, and of the World Wide Web?” (2002, 210). While this might be read as the typical humanistic bemoaning of technology, Derrida has a more radical insight, observing that, “what has been upset in this way is the topology of the event, the experience of the singular taking place” (210).

My presentation discusses this technoscientific mutation of time and space, termed event topology, as it speaks to the future of the humanities, particularly English Studies. Far from seeing this mutational process as deterministic (and hence already settled), I argue, broadly speaking, that “the experience of the singular” offers substantial opportunities for the emergence, the becoming, of indeterminable subjectivities and communities, opportunities that give the humanities a chance to engage new media in a significant intellectual and ethical manner.

I will approach this rather abstract concept of “event topology”
through the interlocking tropes of virtuality and haunting. In
conventional parlance we might speak of the humanities as being haunted
by the prospect of information technologies, including new media, and
what they might portend for the humanities’ traditional pedagogical and
scholarly projects, such as the instruction in a particular mode of
print literacy delivered through literary studies. In effect, our
present actions are haunted by a ghostly image of the future, what one
might call a kind of “virtual prognosis,” that is in turn haunted by
past specters, by our disciplines’ past relations to technologies and
technoculture and past relations with texts constitutive of our
discipline. In this sense, a virtual haunt is a specter of our
technological future.

In the conventional ghostly haunting, the encounter with the ghost
is generally an uncanny one. As Freud explains, the uncanny experience
is an encounter with something that is familiar to us but that we have
largely defamiliarized through an ongoing process of repression. Our
repression of the uncanny may often be perceived as necessary for
maintaining the illusion of our consciousness. The German media
philosopher Friedrich Kittler sees the uncanny as a common feature of
early cinema. In the camera’s reproduction of our uncannily mechanical
motions (accentuated perhaps by early film’s herky-jerkiness), one
encounters the fragmentary, mechanical, unconscious motions of the body
habitually repressed by the imaginary cohesion of consciousness. It is
a fragmentation that is exponentially intensified in digital cinema,
where virtual reality is produced through millions of binary digits
rather than thousands of chemical frames. In contrast, the literary has
served as a mechanism for reaffirming cohesive identity. As Kittler
writes,


The fact that the minimal unevenness between stroke and paper can
store neither a voice nor an image of a body presupposes in its
exclusion the invention of photography and cinema. Before their
invention, however, handwriting alone could guarantee the perfect
securing of traces…And what applied to writing also applied to reading.
Even if the alphabetized individual known as the “author” finally had
to fall from the private exteriority of handwriting into the anonymous
exteriority of print … alphabetized individuals known as “readers” were
able to reverse this exteriorization. (1999, 9)

In other words, writing and reading literature have served as a
means to communicate between two cohesive, “alphabetized” individuals:
the author and the reader. And though our continuing adherence to the
“intentional fallacy” separates specific authorial intent from reader
interpretation, literary education remains founded on the perception of
reading literature as a humanistic experience, as a means for
connecting the reader to other humans. This value has long situated
literary study as a panacea for the dehumanizing effects of industrial,
and now information, technology. Obviously, however, literary studies
is simultaneously founded upon the industrialization of print and has
been intimately involved in the education of the professional and
managerial classes of industrial and information economies.

The appearance of the specter of virtuality results in English
Studies’ uncanny encounter with its habitually repressed relationship
to technology. Faced with information technologies on a daily basis, in
our classrooms and offices, and probably in our homes as well, it
becomes increasingly difficult to overlook the effect technology has
had upon literacy. One must realize that the tension here stems not
simply from admitting that computers are changing literacy. That is a
common enough lament. The tension comes in recognizing that the
foundational texts of our discipline, the literary canon for example,
are also shaped by technologies; it comes in recognizing a fundamental
shift in our understanding of textuality and the “literary.” Of course,
the appearance of new media is a significant factor here as well. Not
only does new media result in new forms of textuality and new forms of
narrative online or in video games, but even books themselves have been
physically altered by the fact they are now produced via these
technologies. Contemporary books are new media and often they ask to be
read in new ways that reflect their remediation via virtuality. It may
be the case that people read now more than they did a decade ago, but
they read text messages, e-mail, blogs, and other websites.

In part, what this means is that the experience of reading has
been shaped by the technologies readers inhabit and the frequency with
which they inhabit those textual spaces. This is also haunting. A haunt
is also a place that one frequents, a subjective association with a
space that is productive of identity. A haunt is not only a space or
location but a time as well, or more precisely a timing pattern, a
frequency. A virtual haunt thus might also be a kind of virtual space,
a community that extends a subject position to the one who haunts it.
In this case though, a haunt is a familiar and comfortable place. This
situation of the haunt as simultaneously an eerie and a comfortable
place fits effectively with Freud’s unheimlich. However, I’m not
satisfied with that answer, though I think it leads us in the right
direction. We need to look more directly into the processes of
subjectivity from which the discomfort of the uncanny would have us
turn away.

Michael Heim describes the discomfort of virtual reality as
Alternate World Syndrome (AWS), “the relativity sickness that comes
from switching back-and-forth between the primary and virtual worlds”
(1998, 182).  Heim explains that AWS is not unlike earlier technology
sicknesses such as simulator sickness, motion sickness, and jet lag.
It occurs when there is a substantial disconnection between various
sensory inputs into the body: for example, our eyes register movement
but our inner ears do not. Heim insists this is more than a
technological glitch, that “AWS concerns not the system per se, but the
system within the broader context of world entrance and exit” (185).
Both the uncanny and AWS suggest the disconnection between conscious
and embodied perception. The uncanny results from a conscious encounter
with repressed embodied processes while AWS is produced when the body
receives contradictory sensory information from our conscious
investigation of a virtual space. Getting at the discomfort created by
this disconnection begins with articulating a sense of space that both
consciousness and unconscious embodied processes share.

Typically, we think of the conscious mind as articulating
space in Cartesian terms. Objects in Cartesian space are finite and
locatable, and the rational subject functions through controlled and
controllable interactions with these objects. However, we also employ an
embodied mode of navigation, which we term proprioception.
Proprioceptive navigation suggests that we do not interface with space
as a set of rational, fixed coordinates but rather through an embodied
perception that takes on one task at a time. For example, a walking robot built on
the Cartesian model would have a centralized program for motion, a set
of sensors that would map out the room in which the robot was located,
and a navigation program that would plot the route the robot would take
from start to finish. Alternatively, a robot built on a proprioceptive
model, perhaps like those built by Rodney Brooks, distributes cognition
throughout the robot. Rather than a walking program, each leg is
programmed to balance itself. In effect, the robot “learns” to balance
and walk each time it moves. After all, as anyone with a small child
can tell you, walking is little more than controlled falling. Space is
similarly navigated. It is not necessary to know everything in order to
move.
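For readers who like to see the contrast concretely, here is a minimal, purely illustrative sketch in Python (my own toy example, not Brooks’s actual architecture): the “Cartesian” robot computes its entire route from a global map of coordinates before it moves, while the “proprioceptive” robot holds no map or route at all, each leg simply correcting its own balance at every step.

```python
# Toy sketch only (not Brooks's code): centralized Cartesian planning versus
# distributed, per-leg "proprioceptive" balancing.
import random


def cartesian_walk(start, goal):
    """Centralized model: plot every waypoint in advance, then execute the plan."""
    route = [start]
    (x, y), (gx, gy) = start, goal
    while (x, y) != (gx, gy):
        x += (gx > x) - (gx < x)  # step one unit toward the goal on each axis
        y += (gy > y) - (gy < y)
        route.append((x, y))
    return route  # the whole path is known before a single move is made


class Leg:
    """Distributed model: a leg knows only its own tilt and how to correct it."""

    def __init__(self):
        self.tilt = 0.0

    def step(self):
        self.tilt += random.uniform(-1.0, 1.0)  # each step perturbs the leg's balance
        self.tilt *= 0.5  # local correction back toward equilibrium


def proprioceptive_walk(legs, steps):
    """No map, no route: the robot 'learns' to balance anew at every step."""
    for _ in range(steps):
        for leg in legs:
            leg.step()


if __name__ == "__main__":
    print(cartesian_walk((0, 0), (3, 2)))  # the full plan, computed up front
    proprioceptive_walk([Leg() for _ in range(6)], steps=10)  # only local corrections
```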

What does help proprioception is repetition or frequency. It’s
your ability to make your way through your home in the dark. It’s the
experience of finding your way from work to home without remembering
the trip. In short, proprioception is your navigation of your haunts.
It is the cybernetics of haunting.

And proprioception is not simply about how we find ourselves
in physical spaces (though this is the literal meaning), but how we
find ourselves period—in psychic spaces, political spaces, wherever.
This topological approach to space does not suggest that Cartesian
spaces do not exist. Instead, it is a conceptualization of how those
spaces emerge: Cartesian coordinates are our conscious attempt to
organize and segment an otherwise undifferentiated, continuous
topology. The linking of Cartesian and topological spaces creates a
hinge where our proprioceptive experience of space is doubled by our
cognitive mapping of our location. Our map of Cartesian coordinates,
dominated by various landmarks, lies over our embodied sense of
direction, reinforcing the latter. We can see how these relations work
in a typical experience that causes a disjunction between proprioception
and visual-cognitive mapping: getting lost. This might occur when we
come up out of a subway station: our proprioceptive sense leads us to
expect to emerge on one side of the street, only to discover that we
are not where we expected. In this moment we need to access our
cognitive map and re-place ourselves within it. Or, inversely, as Brian
Massumi observes, “the first thing people typically do when they
realize they are lost and start trying to reorient is to look away from
the scene in front of them, even rolling their eyes skyward. We figure
out where we are by putting the plain-as-day visual image back in the
proper proprioceptive sea-patch” (2002, 182). In such scenes,
we find ourselves reawakening; space shifts around us (and/or we shift
in space).

Perhaps becoming-conscious is a similar awakening. Massumi
explains this through a reference to an experiment with cognition.
Volunteers had their brain waves measured by an electroencephalograph
(EEG) machine. They were then asked to flex a finger at a moment of
their choosing and to note the position of a dot on a spatial clock.
Two-tenths of a second passed between the moment of choosing and the actual
flexing of the finger, but the EEG machine measured a marked increase
in brain activity 0.3 seconds before the moment of choosing (2002, 29).
This suggests that our conscious choice does not initiate the
decision-making process but rather marks only one moment in it. From
this Massumi contends, “Will and consciousness are subtractive. They
are limitative, derived functions that reduce a complexity too rich to
be functionally expressed” (29). That is, consciousness marks the
moment when a multiplicity of potential, virtual intensive properties
becomes specific material extensive properties: the thought to flex a
finger becomes.
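To keep the timing straight, here is a small, purely illustrative tally in Python that simply restates the figures reported above, measured relative to the moment of the flex:

```python
# Illustrative only: restates the reported timings relative to the flex at t = 0 seconds.
flex = 0.0
conscious_choice = flex - 0.2  # the choice is registered 0.2 s before the flex
brain_activity_onset = conscious_choice - 0.3  # EEG activity rises 0.3 s before the choice
print(brain_activity_onset, conscious_choice, flex)  # prints: -0.5 -0.2 0.0
```

In other words, the brain had been at work for half a second before the finger moved, and well before the moment the subject experienced as choosing.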

Massumi presents thought emerging in an event topology, a
moment in space-time becoming. This becoming is already anticipated in
the bifurcated concept of the virtual. On the one hand, in the sense in
which I have been using it, the virtual references “virtual reality,” the
technological production of a simulated environment such as the
internet, a video game, or digital effects in a movie. On the other
hand, the virtual references an indeterminate state, a state of flux.
When we think about the virtual haunt in terms of this latter
understanding, we come to see it as pointing to an indeterminable,
fluctuating space inscribed upon by frequencies. It is in this virtual
haunting that one encounters the event topology of the singular that
Derrida connects with the coming of the technical stage of
virtualization.

This intersection of the subject with the event is
articulated in Deleuze and Guattari’s use of the term haecceity
(Deleuze and Guattari 1987, 296). Typically we conceive of ourselves as
subjects who exist through time; we move through events, creating
memories. The haecceity conceives everything as being produced in the
moment, through the event itself, as an alternate mode of individuation
to that of the ideological subject. In these terms, “you are longitude
and latitude, a set of speeds and slownesses between unformed
particles, a set of nonsubjectified affects” (262).

The haecceity also marks the site of becoming-digital within
the material, analog systems of computer hardware. It is important to
remember that the processes of immanence, affective intensity, and
representative capture do not exist solely in psychological or human or
even “living” entities, but rather as material processes. The
computer’s voltage intensities are partially captured in the
representative coding of programming languages, but in the haecceity,
the cusp between the virtual and actual, the computer is always
becoming-digital. Here the potential for pluridimensional “noise” to
mutate into other information always exists, just as it does for
conventional writing. Writers of new media “texts” participate then in
a haecceity that includes computer technology. Consciousness emerges
from immanent nonbeing in the moment, as an element in the multiplicity
of the haecceity, as intensive affects that move toward the future
plane of reference where they are articulated as forms.

So what might such modes of individuation portend for the
arguably public space of the university? The modern university
functioned through its twin values of Truth and nationalism to create
an institutional community that produced citizen workers. Like all
communities, the university functioned by categorizing individuals
according to traits that existed in a fixed relationship to one
another: student, professor, biologist, historian, etc. While
undoubtedly performing this ideological operation, the university also
provided a space that was isolated from, though certainly not unrelated
to, the marketplace. Both the historian and the biologist were licensed
to pursue and speak Truth about their disciplines because they
participated in a larger, indeed universal, system.

Thus the uncanny specter English Studies encounters in
virtuality does not come only for literature. It comes for the entirety
of the disciplinary community and its knowledge. It presents us with
the fragmentary, nonhuman processes of cognition. As subjects, we are
partial entities, stuttering in a broken language, banging around with
our tools, and constructing snippets of code that we compile, all of
which is occluded within the illusory cohesion of personal and social
identities and histories. The challenge comes in reconciling ourselves
to whatever we become.

And thus by way of conclusion, I want to point to Giorgio
Agamben’s contention that “the novelty of the coming politics is that
it will no longer be a struggle for the conquest or control of the
State, but a struggle between the State and the non-State (humanity),
an insurmountable disjunction between whatever singularity and the
State organization” (84). It would be an error to imagine the Internet
as a potential saving public sphere. Even if we were to overcome the
digital divide, even if some future Internet were to become an equally
accessible and egalitarian space for citizens, it could never serve the
messianic role of saving the state, of exorcising the specter of
commodification. Instead, we must abandon our notions of time and space
and thought as illusory apprehensions, as failed attempts to grasp the
singular materiality of the event, and turn instead toward the task of
building communities not on the basis of common identity but on the
acceptance of whatever nonidentity, the fluctuating becoming-conscious
in the haecceities of distributed cognition.

Our struggles to learn to live with new media often are
articulated as “practical” matters: the cost of technology, the
labor-intensive matter of keeping current, and so on. Otherwise, they are
discussed in terms of their potential dehumanizing effects or
deleterious impact on teaching. I’m not interested in shutting down
these conversations. However, I believe they operate by occluding the
concepts I am discussing today. If we are to learn to
live and work with new media, networks, and information technologies in
general, we will need to understand the material and symbolic
environments of distributed cognition that they engender. We will need
to investigate how the indeterminate spaces of one type of virtuality
become articulated in the technological virtual plane of reference in
which we represent our world and ourselves to ourselves. Specifically,
by exploring the haunting of the virtual-technological by the
virtual-indeterminate, we might come to understand the material
processes by which media, or more generally what one might call
“symbolic behavior,” interface with the body in the production of
thought. Only by understanding these processes of affective production
can we then investigate the ideological processes that establish
subjectivity and hence our function in cultural spaces like
universities.
