We’ve been reading Manovich’s Software Takes Command in my media theory course. The driving question of the book is “what happens to media after software?” If that question strikes you as McLuhanesque, then I would say you are on the right track. The book has a historical element. It begins in the 60s and 70s, looking particularly at the work of Alan Kay, and moves into the 80s and 90s, where the process of “softwarization,” as Manovich terms it, starts to take hold. Here he gives special attention to Photoshop. The book ends fairly close to the present with some discussion of social media. However, it would be inaccurate to think of the text as media history. It is really more conceptual in its efforts to understand the impact of software on how we understand media. To that end, I am interested in two particular ideas: software evolution and software epistemology, as Manovich calls them.
Manovich’s discussion of evolution comes with the usual caveats that he is using the term metaphorically and that he doesn’t mean that software develops by the same mechanisms as biological entities. Fine. I didn’t really think that Photoshop and InDesign were getting busy on my hard drive anyway. On the other hand, the “metaphor defense” is a little unsatisfying. Isn’t it possible to say that evolution is a more abstract concept and that we can have different kinds of evolution (software and biology, for example) without saying that such claims are any more metaphorical than language inevitably is?
Through the process of softwarization, older media (e.g. a photograph and a watercolor painting) get simulated. As Manovich points out, this is not a simulation of the material but a simulation of the tools and interfaces. So for example, with digital video, software simulates the ways in which we were able to interact with traditional film (playing forwards and backwards, cutting, panning, zooming, etc.), but we don’t get the material film. In creating these simulations for a variety of media, software blends them together through the use of common data structures. (N.b. here Manovich is using the term “data structure” differently from its use in computer science.) By data structures, Manovich is referring to common file types, so that the photograph and the watercolor painting both become jpegs. That’s all fairly straightforward. Here is where this gets interesting though. Manovich argues:
software simulation substitutes a variety of distinct materials and the tools used to inscribe information (i.e., make marks) on these materials with a new hybrid medium defined by a common data structure. Because of this common structure, multiple techniques that were previously unique to different media can now be used together. At the same time, new previously non-existent techniques can be added as well, so long as they can operate on the same data structure.
So in Photoshop we can use techniques that simulate painting techniques on photographs and vice versa. I’m guessing anyone who has used Photoshop has messed around with all those filters. This is where we begin to see the hybridization of media. Two media that were materially distinct and had techniques that were tied to that distinct materiality are now able to combine to produce new media offspring.
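To make the point concrete, here is a toy sketch of my own (not Manovich’s example, and all the names here are invented): once a photograph and a watercolor are reduced to the same data structure, in this case a plain grid of grayscale values, any technique written against that structure works on both, regardless of their original materials.

```python
def box_blur(image, radius=1):
    """Average each pixel with its neighbors -- a technique that only
    'sees' the data structure, not the original material."""
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            ys = range(max(0, y - radius), min(h, y + radius + 1))
            xs = range(max(0, x - radius), min(w, x + radius + 1))
            vals = [image[j][i] for j in ys for i in xs]
            out[y][x] = sum(vals) / len(vals)
    return out

# Tiny stand-ins for a scanned photograph and a scanned watercolor:
photo = [[0.1, 0.9, 0.2], [0.8, 0.3, 0.7], [0.4, 0.6, 0.5]]
watercolor = [[0.5, 0.5, 0.2], [0.9, 0.1, 0.6], [0.3, 0.7, 0.8]]

# The same technique applies to both media, because the technique
# operates on the shared structure, not on paper or emulsion.
blurred_photo = box_blur(photo)
blurred_watercolor = box_blur(watercolor)
```

The filter never asks whether its input “is” a photo or a painting; that indifference is exactly what lets previously medium-specific techniques combine.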
That’s software evolution; now let me turn to software epistemology. Manovich writes, “Turning everything into data, and using algorithms to analyze it changes what it means to know something. It creates new strategies that together make up software epistemology.” Here I am reminded of DH, of course. But I would put it this way: our knowledge of an object isn’t in the object to be discovered but produced through the network of relations by which we encounter that object. This is as true for a novel or photograph as it is for a marmoset or a black hole. There’s no doubt that today that network of relations includes software as a powerful actor. Furthermore, the hybridizing process of software evolution does mean that techniques can cross the material boundaries of older media. That is, we can bring new techniques to creating knowledge about texts, and clearly we have.
My interests are more heuristic than hermeneutic, however. So my curiosity now tends toward wondering how/if Manovich’s insight gives us a way to invent new media hybrids, particularly for the purposes of our academic work, though I am generally interested in the activity. My interests tend toward the mobile and the social, the everyday uses of media available to students and faculty. When I start to think about such matters it’s easier to see that it is not just about software. It’s easy to see how one could combine several commonplace social media applications (a collaborative composing tool like a wiki, realtime communication like Twitter or Google Hangouts, and asynchronous discussion and documentation on a blog) to engage in a very different kind of scholarly and/or pedagogical activity than the one that currently predominates in the humanities. With location-based data and smarter objects we can provide context-sensitive information. And yes, we can find people out there doing those things, but doing it requires more than there being a new media hybrid with the right affordances. It also requires a pedagogical and rhetorical shift. It requires an epistemological shift where I change my mind about what’s worth knowing and what are useful ways to construct that knowledge.
I’m encountering this right now in a conversation on campus about eportfolios. Sure, there are many valid questions to ask about eportfolios, and we can point to institutions where they haven’t been successful, where the students and faculty find them counterproductive and onerous. On the other hand, I can point to more institutions where students and faculty find giant lecture courses and multiple choice tests counterproductive and onerous. Can we get rid of those too? Eportfolios are a media hybrid born of the recognition that our students’ work is largely digitally composed, regardless of discipline, and that we can thus put all that work in one place where we can take advantage of certain media-independent processes. The institution might be most interested in the assessment processes, but I am most interested in the sharing and discussing processes. But this is a clear example of an epistemological shift. Instead of putting effort into creating old media types with their historical epistemological constraints, one decides to do new kinds of work and value a new kind of knowledge.
2 replies on “software evolution and software epistemology”
I do like a broad definition of evolution, but it’s not my term of choice for this.
Digitization isn’t so much a hybridization or evolution as it is an abstraction, removing media from the physical to the conceptual, and shedding physical limitations along the way. It’s important in a different way than breeding a new kind of dog or grafting two plants together. It’s formless.
Once something is abstracted into a computer, regardless of whether it uses the same file format or not, it is just ones and zeros. The abstract world of math is straightforward to apply to it in any direction. Even if two media types were in separate file formats, they are a conversion algorithm away from being directly comparable. The point about common file types being important feels like a throwback to concepts of the physical world where form mattered.
To expand on your Photoshop example: yes, you can use a spray-can algorithm on a painting or a photo. But you can also feed your digital photo into an algorithm to translate it into music and play it through your headphones.
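That kind of cross-media translation is easy to sketch. Here is a toy “sonification” of my own devising (the pentatonic mapping, the left-to-right reading order, and all the names are arbitrary choices, not any standard algorithm): each pixel’s brightness becomes a MIDI note number.

```python
PENTATONIC = [0, 2, 4, 7, 9]  # semitone offsets of a major pentatonic scale

def pixel_to_note(brightness, base_midi=60):
    """Map a brightness value 0-255 onto three octaves of the scale,
    starting at middle C (MIDI note 60)."""
    step = brightness * len(PENTATONIC) * 3 // 256
    octave, degree = divmod(step, len(PENTATONIC))
    return base_midi + 12 * octave + PENTATONIC[degree]

def sonify(image):
    """Read the image left-to-right, top-to-bottom, one note per pixel."""
    return [pixel_to_note(p) for row in image for p in row]

image = [[0, 64, 128], [192, 255, 32]]
melody = sonify(image)  # -> [60, 67, 76, 86, 93, 62]
```

Once the photo is “just ones and zeros,” whether those numbers come back out as pixels or as pitches is a choice made by the algorithm, not by the medium.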
This also opens the door for anything in math to be translated back into media. A math formula can become a picture for example. For an interesting time, check out implementations of fractals which do exactly that.
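A minimal version of that formula-to-picture move fits in a few lines. This sketch renders the Mandelbrot set as ASCII art: iterating z → z² + c at each point decides whether that point gets drawn (the grid size, escape threshold, and characters are arbitrary choices).

```python
def mandelbrot(width=60, height=24, max_iter=30):
    """Render the Mandelbrot set as ASCII: points where z -> z*z + c
    stays bounded are drawn with '#', points that escape with '.'."""
    rows = []
    for j in range(height):
        line = []
        for i in range(width):
            # Map the character grid onto the complex plane.
            c = complex(-2.0 + 3.0 * i / width, -1.2 + 2.4 * j / height)
            z = 0j
            for _ in range(max_iter):
                z = z * z + c
                if abs(z) > 2:
                    line.append(".")
                    break
            else:
                line.append("#")
        rows.append("".join(line))
    return "\n".join(rows)

print(mandelbrot())
```

The picture exists nowhere except as the behavior of the formula; the “medium” is pure math all the way down.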
Of course, you could always map physical media to math; it was just too unwieldy and difficult to get any meaningful use out of for anyone to bother.