If some Americans are slowly rousing to the realization that getting information via social media resulted in a distorted (and sometimes completely false) view of the past election, perhaps they can extend that epiphany to recognize that the distortion is ongoing and not limited to presidential politics. It is also not limited to the specific social media effects of echo chambers and fake news, or even to social media itself. I will talk about these things as available means of persuasion (they persuaded you, right?). The problem now is what to do. And it is a serious issue; one might say it is a matter of national security, as other nation states employ social media and fake news to destabilize our democracy. Indeed, as this Washington Post story reports, the boundaries between fake news and propaganda are blurry.
Before we can move forward, though, we'd have to agree on a few things. The first is that we don't want to be dupes. There are plenty of pieces popping up about how to identify fake news when you encounter it, but the primary hurdle is whether or not you care about the accuracy of what you're reading or sharing. Total fabrications are probably not hard to detect. Then there are things with some truth in them (analogous to political smear–ahem, I mean ad–campaigns) but that are selective and slanted. Then there's discerning the difference between an op-ed piece and a news piece. There's recognizing that news can only ever tell part of the story and that in necessarily choosing to report one story, another goes unreported. (I.e., we have limits.) Once we get through all of that, we can start figuring out interpretive processes for understanding the significance of a particular news report.
Before we get too overwhelmed, though, we can recognize that the elements of this challenge are longstanding. Maybe you remember Jon Stewart lampooning CNN's Crossfire? Or maybe you read Chomsky's Manufacturing Consent from the 80s. Presumably we all know of the operation of Nazi and Soviet propaganda in the middle of the century (even if we sometimes failed to recognize it among ourselves), or the rise of advertising in the same period (you watched Mad Men, right?). And there's the yellow journalism that dragged us into the Spanish-American War. You can go further back, but once you start predating mass media, the media ecological conditions are so different that the comparisons become quite strained. The good news is that we already have some very good mechanisms for producing good information: not perfect information, but "good enough," by which I mean information that, when relied upon for action, bears out. The scientific method and other research methods carried out by experts and professionals are some examples. And we have accessible strategies for critically evaluating the media we encounter, as I mentioned above.
However, none of that is really the problem I want to talk about. I really just wanted to point out that 1) we probably should want good information, 2) these things have happened before, and 3) we already have some means for producing and evaluating good information. The obvious current problem is not only that "good information" is mixed in with other types (faulty, maliciously misleading, playfully misleading, etc.) but also that there is an overwhelming amount of potentially good information, and some of it exists in genres that are not easily understood by the citizenry as a whole. I say "potentially" good information because even well-produced information may not be the information you need for the particular question you're asking. Just looking in English Studies, there's a ton of good information about literature. Is it useful to you? Maybe.
So that's where we get what I'm calling "distributed deliberation." It's a term that plays off of Edwin Hutchins's "distributed cognition" (read about it on Wikipedia if you like). Basically, the idea is that our capacities for cognition arise through relations with our environment, and we become capable of more and different cognitive tasks as we develop that environment. In this sense, we've "always already" had a distributed deliberative environment. We've relied on trusted friends, families, community leaders, and institutions to help us: churches and schools, even the media and government. In the contemporary media ecology, we also need to rely more directly and self-evidently upon machines. The processes (or procedural rhetorics, to borrow Ian Bogost's term) of Google, Facebook, Amazon, or YouTube (just to name the four most visited sites in the US) must make choices for you; they must deliberate. Even some pseudo-random presentation of links, posts, products, or videos would be a form of deliberation, of choosing.
Certainly there are humans deeply involved in those processes. Humans write and manage the algorithms at work here, though that does not mean they can fully predict how those algorithms will behave. And sometimes when they can, some humans make calculated decisions that do not benefit the rest of us, as with Facebook's actions related to fake news. But before we show up at Zuckerberg's door with pitchforks, we need to understand that deliberation isn't easy, as this Bloomberg article discusses. How do we teach machines to recognize grey areas when we have trouble doing so ourselves? Perhaps in another post I'll write–half seriously–about the future of "artisanal information."
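To make the point concrete, here is a minimal sketch (in Python, with entirely invented posts, signal names, and weights; it is not any platform's actual algorithm) of how even a trivial feed ranker deliberates. Every constant encodes a judgment about what a reader should see, and even "random" presentation is the deliberate choice of a uniform distribution.

```python
import random

# Hypothetical posts with the kinds of engagement signals a platform tracks.
posts = [
    {"id": "a", "clicks": 120, "shares": 4,  "age_hours": 2},
    {"id": "b", "clicks": 15,  "shares": 30, "age_hours": 10},
    {"id": "c", "clicks": 60,  "shares": 1,  "age_hours": 1},
]

def score(post, w_clicks=1.0, w_shares=5.0, decay=0.9):
    """Each constant is a deliberative choice: valuing a share five times
    more than a click, or discounting older posts, is a judgment about
    what readers should see."""
    freshness = decay ** post["age_hours"]
    return (w_clicks * post["clicks"] + w_shares * post["shares"]) * freshness

ranked = sorted(posts, key=score, reverse=True)

# Even "random" presentation is a choice: here, of a uniform distribution.
shuffled = random.sample(posts, k=len(posts))
```

Notice that changing a single weight reorders the feed: with these invented numbers, the much-clicked post outranks the much-shared one, but raising `w_shares` would flip them. That is deliberation, executed procedurally.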
So here's the bottom line:
- We need to recognize the role of the media ecology in deliberation and that it’s not inherently bad that deliberation is distributed.
- We need to develop specific understandings of the deliberative processes of the sites on which we rely the most (e.g., when you look at your FB page or the results of a Google search, do you know why you are seeing the particular information you are seeing?)
- We need to re-articulate the specific epistemological methods–including their advantages and limits–that we will accept as legitimate constructors of knowledge.
- We need to understand the rhetorical features of the genres that participate in the construction and communication of that knowledge such that we can know when a particular piece of media has gone out of bounds.
- We need to build technologies that can allow us to reward and punish media appropriately so that our deliberative acts can shape the automated processes that build the media ecologies we experience.
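That last point can be sketched as a feedback loop (a hypothetical toy, with invented source names, weights, and bounds, not a description of any real system): acts of rewarding and punishing media nudge per-source trust weights, which then feed back into whatever ranking process builds the media ecology we see.

```python
# Hypothetical per-source trust weights that a ranker would multiply in.
trust = {"reliable-news.example": 1.0, "fakery.example": 1.0}

def give_feedback(source, verdict, step=0.1):
    """A deliberative act ("reward" or "punish") nudges a source's weight.
    Clamping to [0, 2] is itself a design choice: no single act dominates."""
    delta = step if verdict == "reward" else -step
    trust[source] = min(2.0, max(0.0, trust[source] + delta))

# Three punishments demote one source; one reward promotes the other.
for _ in range(3):
    give_feedback("fakery.example", "punish")
give_feedback("reliable-news.example", "reward")
```

The hard design questions live in the parameters: how big a step each act takes, whose feedback counts, and how quickly weights recover, which is exactly where the grey areas return.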
But maybe before that, we need to agree on what a “fact” is, how facts are made, and how we judge whether or not a particular fact is well-made.