Categories: digital rhetoric

Composing in and with data-driven media #4C18

[Image: China's social credit system. Credit: Kevin Wong, Wired]

[A text version of my presentation from yesterday]

At the beginning of the century, Lev Manovich identified five principles of new media, the last of which he termed “transcoding.” Transcoding describes the interface between cultural and computational layers. In part, transcoding observes that while digital media objects appear to human eyes as versions of their analog counterparts (images, texts, sounds, videos), they are also data and as such subject to all the transformations available to calculation. Such familiar transformations include everything from cleaning up family photos to making memes to share online or even changing the font in a Word document. As the quality of video and the computational power available to average users increase, such transformations have also come to include altering videos to make people appear to say things they haven’t said or even putting someone’s head on another person’s body.

Perhaps unsurprisingly, much of the early experimentation with this video technology has involved fake celebrity porn. That’s certainly NSFW, so I’ll leave it to you to investigate at your own discretion. The point is that the kinds of media we are able to compose are driven by our capacity to gather, analyze, and manipulate data.
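To make that point concrete, here is a minimal sketch in Python, using the Pillow and NumPy libraries, of what it means for a family photo to be “also data.” The filenames are placeholders and the sketch assumes an ordinary color JPEG; the point is simply that, at the computational layer, cleaning up a photo is arithmetic.

```python
# A minimal sketch of Manovich's "transcoding": what a viewer sees as a family
# photo is, to the machine, an array of numbers open to any transformation
# calculation allows. Filenames are placeholders; assumes an RGB JPEG.
from PIL import Image
import numpy as np

photo = Image.open("family_photo.jpg")            # the cultural layer: an image
pixels = np.asarray(photo, dtype=np.float32)      # the computational layer: data

# "Cleaning up" the photo is just arithmetic on that data:
brightened = np.clip(pixels * 1.2, 0, 255).astype(np.uint8)  # brighten by 20%
flipped = brightened[:, ::-1, :]                              # mirror horizontally

Image.fromarray(flipped).save("family_photo_edited.jpg")
```

Deepfakes extend this same logic with far more elaborate calculations, but the underlying condition is identical: once media are data, they are open to manipulation.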

The other part of transcoding, however, points to the way in which data-driven, algorithmic analysis of user interactions shapes our experience of the web, from Netflix recommendations to trending Twitter hashtags. In recent months, the intersection of these two data-driven capacities, the ability to create convincing fake media and then spread it across online communities, has become the subject of national security concerns and national political debate. I’m not here to talk specifically about unfolding current events, but they do offer an undeniable backdrop and shape the situation in which these rhetorical processes can be studied.

Certainly there will be technical efforts to address the exploited weaknesses in these platforms. However, computers are by definition machines that manipulate data, and, as long as these machines operate by gathering data about users and using that data to create compelling, some might say addictive, virtual environments, there will be ways to exploit those systems. After all, one might say these digital products are designed to exploit vulnerabilities in the cognitive capacities of humans, even as they also expand them.

Within this broad conversation, my specific interest is in deliberation. Classically, deliberative rhetoric deals specifically with efforts to persuade an audience to take some future action and, as Marilyn Cooper observes, “individual agency is necessary for the possibility of rhetoric, and especially for deliberative rhetoric” (“Rhetorical Agency” 426). This is not a controversial claim. Essentially, in order for deliberative rhetoric to work, one’s audience must have the agency to take an action. More generally, deliberation requires a cognitive capacity to access and weigh information and arguments. Regardless of whether those arguments come in the form of logical deductions or emotional appeals, the audience still requires the capacity to hear, evaluate, and act on them.

However, in emerging digital media ecologies the opportunities for conscious, human deliberation are increasingly displaced by information technologies. That is, machines make decisions for us about the ways in which we will encounter media. In some respects, one can view this trend as inevitable and benign if not beneficial. Without Google and other search engines, for example, how could any human find information on the web? One might even look at recommendations from a subscription media streaming service like Netflix or an online store like Amazon as genuine, well-intentioned efforts to improve user experience, though clearly such designs also serve corporate interests. Similarly, changes to social media experiences, such as Facebook’s massaging of what appears on one’s feed or the automatic playing of videos, might improve the value of the site for users or might be deliberate acts designed to sway future user actions. Ultimately though, the increasing capacity of media ecologies to record and process our searches, writing, various clicks, and other online interactions, to say nothing of our willingness to have our bodies monitored from biometrics to our geographic movements, produces virtual profiles of users which are then fed back to them and reinforced.
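To see what “fed back and reinforced” means in practice, consider a deliberately crude toy model, sketched here in Python. It is not any platform’s actual system, and every topic name and weight in it is invented, but it captures the loop: clicks update a profile, the profile ranks what appears next, and what appears next constrains what can be clicked.

```python
# A toy illustration (not any platform's real system) of the profiling feedback
# loop: clicks nudge a user profile, the profile orders the feed, and the feed
# shapes the next click. Topic names, weights, and the learning rate are invented.
import numpy as np

TOPICS = ["politics", "sports", "recipes", "celebrity"]

def update_profile(profile, clicked_item, learning_rate=0.1):
    """Nudge the profile vector toward whatever the user just clicked."""
    return (1 - learning_rate) * profile + learning_rate * clicked_item

def rank_feed(profile, candidate_items):
    """Order candidates by similarity to the profile: the 'deliberation' over
    what the user will see happens here, before the user sees anything."""
    scores = candidate_items @ profile
    return np.argsort(scores)[::-1]

profile = np.full(len(TOPICS), 0.25)     # start with no particular preferences
candidates = np.eye(len(TOPICS))         # one "pure" item per topic

for _ in range(20):                      # simulate twenty visits
    order = rank_feed(profile, candidates)
    clicked = candidates[order[0]]       # the user clicks the top-ranked item
    profile = update_profile(profile, clicked)

print(dict(zip(TOPICS, profile.round(2))))   # the profile has narrowed sharply
```

Even in this cartoon, an arbitrary early tie-break ends up dominating the profile after a handful of visits, which is the reinforcement the paragraph above describes.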

To address these concerns, drawing upon a new materialist digital rhetoric, I will describe a process of “distributed deliberation.” This process references the concept of distributed cognition. Distributed cognition is not meant to suggest machines doing “our” thinking for us but rather to describe the observable phenomenon in which humans work collectively, along with a variety of mediating tools, to perform cognitive tasks no individual human could accomplish alone. Distributed deliberation works in the same way. It is useful to think about this in Latourian terms. That is, through the networks in which we participate we are “made to act.” That is not to say that we are necessarily forced to act but rather that we become constructed in such a way that we gain the capacity to act in new ways. For example, through their participation in a jury room or a voting booth citizens are made to deliberate in ways that would not otherwise be possible. However, that is a little simplistic. While we may only be able to vote in that booth, there are many agents pushing and pulling on us as we deliberate. Typically it is far more difficult to discern the direction in which agency flows. As Latour observes, “to receive the Nobel Prize, it is indeed the scientist herself who has acted; but for her to deserve the prize, facts had to have been what made her act, and not just the personal initiative of an individual scientist whose private opinions don’t interest anyone. How can we not oscillate between these two positions?” (An Inquiry into Modes of Existence, 158-9). That is the oscillation between facts demanding certain actions and the agency of the scientist. For Latour, the resolution of this oscillation lies ultimately in the quality of the resulting construction, which, of course, is just another deliberation, one that requires an empirical investigation, the following of experience.

To put it in the context of my concern, as a Facebook user hovers the mouse over the buttons to like and then share a news story placed into her feed, how do the mechanisms of deliberation swarm together and make the user act? Is the decision whether to share or not a good one? Furthermore, while we can and must pay attention to the experience of the human user, so much of the work of deliberation occurs beyond the capacity of any human to experience directly. As such, in charting distributed deliberation we must also investigate the experience of nonhumans, which will require different methods, and that’s where I will turn now.

Understanding the specific operation of those nonhuman capacities is a task well suited to Ian Bogost’s procedural rhetoric, which he describes as “the art of persuasion through rule-based representations and interactions rather than the spoken word, writing, images, or moving pictures. This type of persuasion is tied to the core affordances of the computer: computers run processes, they execute calculations and rule-based symbolic manipulations” (Persuasive Games, ix). Though Bogost focuses on the operation of persuasive games as they seek to achieve their rhetorical goals through programming procedures, he recognizes that procedural rhetoric has broader implications. Selecting a movie or picking a route home with the help of Fandango or Google Maps may be minor deliberative acts, but they offer fairly obvious examples of how deliberation can be distributed.

Yelp, for example, combines location data with a ratings system and other “social” features such as uploading reviews and photos, “checking in” at a location, and providing map directions. These computational processes compose a media hybrid, an expression with the capacity to persuade users. Certainly one might be persuaded by the text of a review or a particularly pleasing photo; text and image play a role here as they might in a video game. But the particular text and photos the user encounters are the product of a preceding procedural rhetoric that decides which businesses to display. It is not only restaurants and other businesses that are reviewed but the reviews and reviewers as well, which become part of a process that determines which of the dozens or hundreds of reviews a business might receive one sees first. In the case of Yelp, users write reviews and rate businesses on a 5-star scale. Yelp then employs recommendation software to analyze those reviews and weigh them. Does it matter that Yelp claims the recommendation software is designed to improve user experience and the overall reliability of the reviews on the site? Maybe. What is key here, however, is that such invisible procedures undertake deliberations for us. The fact that it would be practically impossible for users to undertake this analysis of reviews independently, or that users are still presented with a range of viable options when looking for a local restaurant, does not alter the role that such procedures perform in our decision-making process.

In Yelp one finds a digital media ecology that includes juxtaposed multimedia (e.g., photos, icons, text, maps), computational capacities (e.g., linking, searches, location data, data entry for writing one’s own reviews), algorithms or procedures (e.g., ranking businesses, evaluating and sorting reviews), and media hybrids (e.g., combining with mapping applications to provide directions or linking with your phone to call a business). Indeed one might look at Yelp itself as a media hybrid with its own compositional processes, rhetorical procedures, and genres.
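Yelp does not publish the workings of its recommendation software, so the following Python sketch is purely hypothetical and its weighting factors are invented. It only illustrates the kind of invisible procedure at issue: the reviews themselves are scored, and those scores quietly decide both the rating a user sees and the review that appears first.

```python
# A hypothetical sketch of review weighting and ranking, in the spirit of (but
# not based on) Yelp's recommendation software. All factors are invented.
from dataclasses import dataclass

@dataclass
class Review:
    stars: int             # 1-5 rating left by the reviewer
    reviewer_reviews: int   # how many reviews this reviewer has written
    has_photo: bool         # did the review include a photo?
    days_old: int           # age of the review in days

def review_weight(r: Review) -> float:
    """Score the review itself, not the business: in this invented scheme,
    prolific reviewers, photos, and recency all count for more."""
    weight = min(r.reviewer_reviews, 50) / 50   # cap any one reviewer's influence
    weight += 0.2 if r.has_photo else 0.0
    weight *= 0.99 ** r.days_old                # older reviews fade
    return weight

def business_score(reviews: list[Review]) -> float:
    """Weighted average star rating: the number the user sees as 'the' rating."""
    total = sum(review_weight(r) for r in reviews)
    if total == 0:
        return 0.0
    return sum(review_weight(r) * r.stars for r in reviews) / total

def rank_reviews(reviews: list[Review]) -> list[Review]:
    """Decide which review appears first on the page."""
    return sorted(reviews, key=review_weight, reverse=True)
```

Whatever the real procedure looks like, the structural point stands: the sorting, the deliberation over which voices count, happens before the user reads a word, and the user encounters only its results.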

The softwarization of media did not take off fully until personal computing hardware was powerful enough to run it. Social and mobile media obviously rely on the various species of smartphones and tablets. They require the hardware of mobile-phone and Internet networks and server farms. Whole new industries and massive corporations have emerged as part of this ecology, and this means people: HVAC technicians keeping server farms cool, customer service representatives at the Apple Genius Bar, engineers of all stripes, factory workers, miners digging for precious metals in Africa, executives, investors, and so on. It also involves a shifting higher education industry with faculty and curriculum to produce research and a newly educated workforce, an infrastructure that relies upon these products to operate, and students, faculty, and staff who feed back into the media ecology. In short, a media ecology cannot be only media just as rhetoric cannot only be symbolic. While digital media ecologies create species with unique digital characteristics, they cannot exist in a purely digital space any more than printed texts can exist in a purely textual space.

As such, whatever rhetorical power the algorithmic procedures of software might have, their most powerful rhetorical effect might lie in the belief users have in the seeming magic of a Google search or similar tools. However, as Bogost observes, algorithms are little more than legerdemain, drawing one’s attention away from the operation of a more complicated set of actors:
If algorithms aren’t gods, what are they instead? Like metaphors, algorithms are simplifications, or distortions. They are caricatures. They take a complex system from the world and abstract it into processes that capture some of that system’s logic and discard others. And they couple to other processes, machines, and materials that carry out the extra-computational part of their work…SimCity isn’t an urban planning tool, it’s a cartoon of urban planning. Imagine the folly of thinking otherwise! Yet, that’s precisely the belief people hold of Google and Facebook and the like. (“The Cathedral of Computation”)
Indeed, in some comic bastardization of Voltaire, one might say that if algorithms didn’t exist we would have to invent them as a means of making sense of media ecologies and our role in them. That is, in the face of a vast, unmappable monstrosity of data, machines, people, institutions, and so on intermingling in media ecologies, the procedural operations of software produce answers to questions, build communities, facilitate communication, and generally offer responses to our requests, even as they shape those questions, communities, communications, and requests. In other words, the distribution of deliberation and other rhetorical capacities among the human and nonhuman actors of digital media ecologies is necessary and inevitable. Describing and understanding the complexities of these relations as they participate in our deliberations, rather than simply celebrating or bemoaning the apparent magical abilities of the tools we employ, becomes the first step toward building new tools, practices, and communities that expand our rhetorical capacities.

It’s worth noting that powerful entities are already intentionally at work on these goals. A year ago, when the concerns about Facebook and fake news were really taking off, Mark Zuckerberg published a manifesto declaring that “In times like these, the most important thing we at Facebook can do is develop the social infrastructure to give people the power to build a global community that works for all of us.” In a related vein, as has been widely reported, the Chinese government has a different idea of the role distributed deliberation can play in its creation of a social credit system. I don’t know about you, but I’m not especially sanguine about the notion of Facebook engineers building the social infrastructure for a global community. I’m even less enthused about the possibility of other nations importing these Chinese practices, as if we do not live in a thoroughly monitored environment as it is.

I won’t pretend there are any easy answers, any simple things one can slip into a lesson plan. The task begins with recognizing the distributed nature of deliberation and describing such processes to the extent that we can. This includes paying attention to the devices we keep closest to us and understanding the particular roles they play. And it means inviting those nonhumans into our disciplinary and classroom communities. Just as universities, departments, and faculty are capable of creating structures that encourage conformity to existing rhetorical and literate traditions, they might conversely create structures that are more open to these investigations. This might mean finding ways to use rather than restrict access to digital devices in the classroom and creating assignments that push students to creative uses of the collaborative and distributed cognitive potential of digital networks rather than insisting on insular and individualized labor. It might mean asking questions that cannot be answered by close reading or setting communication tasks that cannot be accomplished by one person writing a text. From there the classroom has to proceed to create solutions to these tasks rather than assuming the answers already exist, which is not to suggest that many answers might not be readily available, but the emergent quality of digital media means, in part, that new capacities can always be considered.
