In the wake of the 2016 US presidential election, many questions were raised about the role Facebook played in disseminating fake news. Mark Zuckerberg’s response, posted to Facebook, points to the challenge fake news presents:
Identifying the “truth” is complicated. While some hoaxes can be completely debunked, a greater amount of content, including from mainstream sources, often gets the basic idea right but some details wrong or omitted. An even greater volume of stories express an opinion that many will disagree with and flag as incorrect even when factual. I am confident we can find ways for our community to tell us what content is most meaningful, but I believe we must be extremely cautious about becoming arbiters of truth ourselves. (“I want”)
Perhaps unwittingly, Zuckerberg reveals the fundamental tension between content that users find “meaningful” and content that is factual. In addition, what we might traditionally call deliberative rhetoric, essays that employ evidence to make arguments about what should be done, falls into the nebulous space of “opinion,” a space made even more uncertain because the facticity of the evidence supporting those arguments is itself often in doubt. Certainly one might reasonably argue that whatever precession of deliberation digital media accomplish, as readers we remain capable of consciously analyzing and evaluating the credibility of media we encounter. We have not yet been brainwashed to accept whatever Facebook tells us: the ability of users to reject, almost reflexively, content that appears at odds with their own ideological convictions demonstrates as much. However, Facebook’s mechanisms seek to discern what each individual user might find “meaningful” and foreground that media, reinforcing the echo chambers users might deliberately create by hiding or unfriending those who disagree with them. Rather than mitigating our cultural and ideological biases, applications like Facebook work to show users a world they will “like” with the primary goal of keeping those users’ attention on the website.
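The tension here can be made concrete with a toy sketch. To be clear, nothing in the following reflects Facebook’s actual system; the function names, weights, and data are all invented for illustration. The point is structural: a feed ranker that scores stories only by predicted engagement has no term for accuracy anywhere in its calculation.

```python
# A hypothetical, minimal feed-ranking sketch (not Facebook's algorithm).
# It illustrates how optimizing for what a user finds "meaningful"
# can proceed without any reference to whether content is factual.

def engagement_score(story, user):
    """Predict engagement for this user from topical affinity and
    social proof. Note that the story's accuracy never enters in."""
    affinity = user["topic_affinity"].get(story["topic"], 0.0)
    social_proof = story["friend_likes"] * 0.1
    return affinity + social_proof

def rank_feed(stories, user):
    # Highest predicted engagement appears first in the feed.
    return sorted(stories, key=lambda s: engagement_score(s, user),
                  reverse=True)

user = {"topic_affinity": {"politics": 0.9, "sports": 0.2}}
stories = [
    {"id": 1, "topic": "sports",   "friend_likes": 2, "accurate": True},
    {"id": 2, "topic": "politics", "friend_likes": 5, "accurate": False},
]
feed = rank_feed(stories, user)
# The inaccurate but ideologically congenial story ranks first.
```

In this sketch the `accurate` field is carried along but never consulted, which is precisely the asymmetry between “meaningful” and factual that Zuckerberg’s statement leaves unresolved.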
But the deeper problem is this. Even if we imagine all this as a primarily technological problem that will eventually be fixed by engineers (and I grant that this requires a prodigious feat of imagination), we are still left in a situation in which machines have taken responsibility for a significant amount of the work that we have termed deliberation. And perhaps we are forced to acknowledge that given the sheer speed and volume of information, there is no other option for us.
This is the rhetorical condition that I term distributed deliberation, a term that nods toward Edwin Hutchins’ concept of distributed cognition. And what I want to focus on today is how one might go about inventing proleptic or anticipatory responses to distributed deliberation that ultimately result in an expansion rather than contraction of our rhetorical and cognitive capacities. And I’m going to take up a new materialist method and consider what it might offer us.
A new materialist digital rhetoric might describe a user-based, distributed-deliberative prolepsis in terms of population thinking. I am going to discuss this in three steps. The first step is to reorient one’s concerns away from the individual human subject and conceive of this challenge as an ecological one involving a constellation of human and nonhuman actors. Casey Boyle observes this in his discussion of smart cities: “Where former understandings of democratic organization relied largely on the techniques of communicating language for deliberating civic activity, the [smart city] looks to sensor technologies and big data methodologies to track and nudge realtime movements and conditions” (“Pervasive” 270-1). That is, civic concerns are not worked out through dialogue among individuals but rather through an expansive, data-intensive, collective process. In some sense, humans have always been in populations: as a species and as family units, tribes, nations, etc. As Boyle recalls, even in Aristotle one finds a discussion of limiting the size of a democracy’s population to one that can all hear a single speaker and be seen by that speaker, recognizing that “civic organization depends not only on persuasive debate but also on the means for circulating information” (270). As such, the shift here is not that we have suddenly become a population but rather that we have become part of a new population. In the simplest terms, one might call this a network-user population.
Once one begins thinking in terms of a network-user population, the second step is to describe that population’s functioning. The populations of social assemblages always include heterogeneous elements, both human and nonhuman. However, once formed, an assemblage begins to act as a means for shaping and homogenizing the agential capacities of its components as processes of parameterization influence the degree of territorialization or deterritorialization, coding or decoding, present in the population. For example, the heterogeneity of humans on a college campus is homogenized into populations of faculty, students, and staff who have different capacities in relation to the campus and who might each be addressed as a separate population and modified according to certain parameters: when an institution changes general education requirements, those changes likely affect all three groups but in different ways as members of those distinct populations. A college campus is a comparatively heavily territorialized and coded social assemblage: a military base would likely be more so, while a local farmers’ market would likely be less so.
At first glance, social media networks would appear to be deterritorialized and coded. They are global in operation, allowing virtually any person to join as a user, and they are obviously coded, not only in the literal sense of being made from programming code, but in the conceptual sense as well through the algorithmic processes that I have been discussing here. However, assemblages are rarely just one way. Instead they have tendencies moving in all directions along these axes, so while social media may be deterritorialized and coded, they also have tendencies toward territorialization and decoding. For example, one might understand the strong territorial boundedness of such sites in terms of their cybersecurity. While social media users represent a heterogeneous group of humans in conventional macrosocial terms, these sites homogenize those humans as users and give each such user equal capacities on the site. These tendencies to create secure borders and homogenize users represent forces of territorialization. In addition, while social media sites are certainly coded, conventionally coding in assemblages operates through written rules and procedures governing all manner of behavior. Despite the terms of service, as is self-evident to any social media user, there is little regulation over the behavior or expression of users. Almost anyone can join Facebook, those users can form almost any kind of internal community or friend network, and they can share almost any kind of media and write whatever, whenever, and wherever they want. Compare this, for example, with the restrictions on expression in the traditional classroom or workplace environment. In this respect one might say that the programming code behind Facebook and similar sites has a decoding effect because of its capacity to process a wide range of user actions and inputs and feed them forward into the production of a customized user experience.
In the same way, one might think of social media populations as emerging from a series of decoded, microsocial actions rather than top-down, coded limitations like laws. The end result is a population that is both territorialized and decoded. That is, as is directly observable by almost any Facebook user, one finds oneself in an assemblage with few restrictions on expression but one that is nonetheless homogeneous, especially if, as a user, one goes about creating an “echo chamber,” as many users do.
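One can sketch this “territorialized but decoded” dynamic computationally. In the following toy model (all names and data are invented for illustration, not drawn from any actual platform), no top-down rule restricts anyone’s expression; the only operation is a repeated microsocial act, dropping connections who disagree, yet the result is a set of homogeneous sub-populations.

```python
# Toy model of echo-chamber formation: homogeneity emerges from
# decoded, bottom-up actions rather than any coded, top-down rule.
# All users and views here are hypothetical.

def prune_network(connections, views):
    """Each user unfollows every connection whose views differ
    from their own, a single round of microsocial pruning."""
    return {
        user: {friend for friend in friends if views[friend] == views[user]}
        for user, friends in connections.items()
    }

views = {"a": "left", "b": "left", "c": "right", "d": "right"}
connections = {
    "a": {"b", "c"},
    "b": {"a", "d"},
    "c": {"a", "d"},
    "d": {"b", "c"},
}
echo_chambers = prune_network(connections, views)
# Each user now sees only like-minded connections:
# {"a": {"b"}, "b": {"a"}, "c": {"d"}, "d": {"c"}}
```

No law or terms-of-service clause appears anywhere in the model; the territorialization is an emergent effect of many small, unregulated choices, which is the sense in which the population is homogeneous without being coded.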
If we can understand ourselves as populations in an assemblage of network-users with certain tendencies toward territorialization-deterritorialization and coding-decoding, then the third step is describing the agential and rhetorical capacities that become available to us through those collective bodies, even as other capacities we have had historically become either less accessible or less effective. There is an array of possible tactical resistances available to network-user populations, from hacking to culture jamming. One might also produce alternative technologies and applications and effectively create new user populations. In DeLanda’s terms, these would be efforts toward deterritorializing and decoding the population to create greater heterogeneity. In some respects, though, the homogenizing power of digital media—its fundamental capacity to reduce user interactions to computations—is unavoidable. Billions of people are networked together and will, as a result, produce a wealth of information. One might create temporary autonomous zones through non-participation or misinformation, or even hope to instill some critical understanding among users, but digital media ecologies will continue to function and adapt to such strategies. Put differently, adaptive processes of de- and re-territorialization are ongoing. Alternatively, through various legal means one might establish codes restricting not only the participation of network-user populations but the ways in which social media corporations collect, analyze, and employ user data. Codes need not only take the form of laws, though, and it is possible that network-user populations could devise their own discursive codes and genres, as one finds in some self-policing user communities like Wikipedia. In short, one might try to grab hold of DeLanda’s parameterizing knobs and turn them in a different direction.
In developing such strategies, one key element is recognizing that assemblages are not monolithic and are invariably composed of heterogeneous elements, even if they have tendencies to homogenize them. Corporations like Google and Facebook have become powerful cultural forces, but only in the last decade. While there are immediate concerns related to these corporations, the deeper issue lies more generally with the role intelligent machines perform in media ecologies. In building the smart, decision-making capacities of these machines, engineers create an agency that can only arise alongside a deliberative capacity to take the right action. Simply put, there’s little use in creating a self-driving automobile that cannot make decisions that keep humans safe (or at least as safe as they are when other humans are behind the wheel). In describing these distributed deliberative processes, it is important to recognize that algorithms aren’t magical but rather part of a larger, heterogeneous system. As such, the solution does not lie in creating machines that work independently of humans but rather in concert with us. This presentation has described some parts of this larger system, but a more extended description is needed, one that follows experience “all the way to the end,” as Latour puts it. Though here I have focused on illuminating the role of social media agents in deliberation, any strategy for shifting the operation of distributed deliberation would need a fuller account of the other human and nonhuman participants in the ecosystem.
Ultimately such descriptions become the foundation for instaurations that seek to foster new capacities experimentally. There is no going back to some earlier media ecology. Instead, invention is required. In developing proleptic tools for deliberation in a digital media ecology, one is already acting in the dark, unable to account for all the data streaming through and unable to predict how algorithms will act or machines will learn. Though it is understandable that as humans our focus might be on our individual deliberative agency, our ability to evaluate rhetorical acts and make decisions about our individual actions, that agency is necessarily linked to larger deliberative networks. An individual user might seek means for anticipating and responding to the social media world presented to her, but those individual responses only have value in the context of a larger network, which, of course, has always been the case. As such, it is not as if we have suddenly lost our legacy capacities to deliberate upon a text or other piece of media placed before us, either as individuals or collectively through discourse; it is rather that those capacities have been displaced by the new capacities of data collection and analysis. This is Boyle’s point as well in his description of an emerging “pervasive citizenship” and rhetoric where every action we take (or at least every digitally recorded action) becomes an action of citizenship and an opportunity for persuasion. As we become assembled as a population of networked users, we become accessible by and gain access to a new sensory-media experience, and our deliberative capacities in this context remain undiscovered, or at least underdeveloped.
That said, a new materialist approach to building mechanisms for deliberation understands the task quite differently from the way, for example, that Mark Zuckerberg describes the challenge Facebook faces in dealing with fake news. He articulates the problem as one of determining what is or isn’t true. However, one might say that deliberation operates precisely in such contexts, where the truth cannot be finally resolved. Though one can view deliberation as an interpretive, hermeneutic process that goes in search of truth or seeks consensus around truth, as in a jury’s deliberation, a new materialist rhetoric views deliberation as an inventive, heuristic process where the measure of the resulting instauration isn’t against a standard of truth but one of significance, of its ability to create knowledge that is durable and useful. Facebook’s machines do not need to know if the content they promote is true; they need to know what it does—how it is made to act and how it makes others act (faire faire): a far more empirically achievable task, one imagines, than discerning truth. In making this observation, I do not mean to suggest a total disconnect between truth and agency or that there is never a need to separate truth from lies and misinformation. To the contrary, one might say new materialism connects truth to agency by investigating the actions that construct knowledge and observing the capacities that knowledge engenders.
On February 16, 2017, in an open letter to the Facebook community, Zuckerberg describes his corporation’s mission in the following way: “In times like these, the most important thing we at Facebook can do is develop the social infrastructure to give people the power to build a global community that works for all of us” (“Building”). This proposed social infrastructure is also a rhetorical infrastructure and a deliberative infrastructure. I am not prepared to cede the responsibility for building the global community to Zuckerberg and the engineers at Facebook. If we want to participate in building a better ecology of distributed deliberation, then we need to begin by describing the rhetorical operation of these extensive assemblages of human and nonhuman populations.