One of the major stories of the weekend has surrounded the interview with Christopher Wylie, former employee of Cambridge Analytica turned whistleblower. Here’s that interview if you haven’t seen it.
It’s good to see this story getting attention, but it’s also something we’ve basically known for a while, right? For example, here’s a NY Times op-ed from right after the election talking about how the Trump campaign used data from Cambridge Analytica to target voters. Or you can watch this BBC news interview with Theresa Hong, who was Trump’s Digital Content Director, from last August. In the interview, she gives a tour of the office where she worked, right next to where the Cambridge Analytica folks were working. Right before the 3-minute mark she explains how people from Facebook, Google, and YouTube would come to their office and help them. They were, in her words, their “hands-on partners.” Unless she’s straight up lying about that, which would seem pretty weird in the context of a video where she otherwise gleefully recounts her role in an information warfare campaign, it’s essentially impossible to believe that Facebook didn’t know what Cambridge Analytica was doing.
The funniest part of the recent news cycle is when the newscaster turns to the expert and says “do you think this affected the outcome of the election?” Hmmmm…. do you think the Trump campaign spent $100M+ ($85M on Facebook advertising alone) in order to not affect the outcome?
So that’s the news. Now, here’s my part. Let’s be good humanists and start with a straight dose of Derrida and his notion of the pharmakon. In considering the pharmacological operation of media, beginning with writing, one might investigate the cognitive effects that emerge from technologies: what each medium gives us and what it takes away. As I’ve written about before, Mark Hansen picks up on this in Feed Forward, where it is really the thesis of the book:
Like writing— the originary media technology— twenty-first-century media involve the simultaneous amputation of a hitherto internal faculty (interior memory) and its supplementation by an external technology (artifactual memory). And yet, in contrast to writing and all other media technologies up to the present, twenty-first-century media— and specifically the reengineering of sensibility they facilitate— mark the culmination of a certain human mastery over media. In one sense, this constitutes the specificity of twenty-first-century media. For if twenty-first-century media open up an expanded domain of sensibility that can enhance human experience, they also impose a new form of resolutely non-prosthetic technical mediation: simply put, to access this domain of sensibility, humans must rely on technologies to perform operations to which they have absolutely no direct access whatsoever and that correlate to no already existent human faculty or capacity.
If you’re wondering what a “non-prosthetic technical mediation” might be, one example is the set of underlying technical operations that drive this Cambridge Analytica story. Non-prosthetic suggests, contra McLuhan, media that are not “extensions of man” (sic).
Think of it this way (and this is a little slapdash but should get the idea across). There’s always a price to be paid to gain access to new capacities. As Hansen suggests, with writing you give up interior memory for artifactual memory. With photography and then film we extend artifactual memory into the visual, but at the cost of access to unmediated experience. Think of selfies, or Don DeLillo’s “most photographed barn in America,” or Walter Benjamin’s remark that film introduces us to unconscious optics. Hansen’s 21st century media, what I would think of as networked, mobile digital media, offer a range of capacities (I won’t attempt to enumerate them), but at what cost? Basically everything. We give the network everything that we know how to give.
It strikes me that you can think of the development of media technologies as an incremental distribution of human cognition. This only works because cognition is always already relational and distributed. I.e., the biological capacity for thought emerges from an existing environmental/ecological platform–we think only because there are things to think about and with. I don’t want to go down the rabbit hole right now, but the ultimate conclusion is that we have become, are becoming, interwoven with our digital selves: politically, psychologically, affectively, cognitively and so on in just about any dimension you can imagine.
And as the saying goes, with these Cambridge Analytica revelations we aren’t seeing the beginning of the end of this story but rather the end of the beginning. We can make laws, create tools, and try to educate people, but, without falling for techno-determinism, human cognitive capacities are shifting, and nothing short of turning off history is going to change that. I don’t think the nature of the shift is pre-determined, but we cannot go backward and we cannot stand still.
Distributed deliberation is all over this story. It’s in the mechanisms of the apps in which FB users answered questions to figure out which Hogwarts house they belonged in (or whatever). It’s in the processes by which that data was collected and transformed into psychographic profiles. It’s in the way those profiles were organized and targeted with specific messages by the Trump campaign. It’s in how those messages were promoted and made visible to those users by social media platforms like Facebook. It’s in the way those messages were then further spread through those users’ networks of friends. Fundamentally, distributed deliberation, particularly as it applies to digital media ecologies, is the way nonhuman information technologies–from bots to server farms to algorithms–participate in the evaluation of information (making judgments for us) and in the making and circulating of deliberative arguments. They don’t do it alone. Humans are part of the system–sometimes, certainly in the case of the figures in these news stories, in very intentional ways. But that’s just dozens of people. Tens or hundreds of millions more participated by sharing data that was harvested, unbeknownst to them. Millions were targeted by these messages. Thousands worked for companies doing innocent jobs helping with customer service or maintaining servers, without whom none of this was possible (you could keep “thanking” people until the orchestra at the Oscars passed out).
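To make that pipeline a little more concrete, here is a deliberately crude sketch of the first few links in the chain: quiz answers become a rough “psychographic” profile, which then selects a targeted message. Everything here is hypothetical–the trait names follow the Big Five (OCEAN) model often cited in coverage of Cambridge Analytica, but the question weights and messages are invented for illustration, not drawn from any actual system:

```python
# Hypothetical sketch: quiz answers -> crude "psychographic" profile -> targeted message.
# Trait names come from the Big Five (OCEAN) model; the scoring rules and ad copy
# are invented for illustration only.

QUIZ_TRAIT_WEIGHTS = {
    # question id -> (trait it supposedly signals, direction of the signal)
    "q1_likes_routine": ("openness", -1),
    "q2_enjoys_parties": ("extraversion", +1),
    "q3_worries_a_lot": ("neuroticism", +1),
}

MESSAGES_BY_TRAIT = {
    "neuroticism": "fear-framed ad emphasizing threats",
    "extraversion": "social-proof ad ('your friends support X')",
    "openness": "novelty-framed ad about change",
}

def profile(answers):
    """Turn yes/no quiz answers into a dict of trait scores."""
    scores = {}
    for qid, said_yes in answers.items():
        trait, weight = QUIZ_TRAIT_WEIGHTS[qid]
        scores[trait] = scores.get(trait, 0) + (weight if said_yes else -weight)
    return scores

def target(answers):
    """Pick the message keyed to the user's highest-scoring trait."""
    scores = profile(answers)
    top_trait = max(scores, key=scores.get)
    return MESSAGES_BY_TRAIT[top_trait]

# One user's quiz answers, harvested via an innocuous-looking app:
user = {"q1_likes_routine": True, "q2_enjoys_parties": False, "q3_worries_a_lot": True}
print(target(user))  # this user scores highest on neuroticism -> fear-framed ad
```

The point of the sketch is how little “deliberation” any single component performs: each step is a trivial lookup or sum, and the persuasive work emerges only from the whole distributed system–data harvesting, profiling, targeting, platform promotion, and human sharing–operating together.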
We are not helpless in the face of this, but we do need new rhetorical practices. And that may sound hopelessly academic and disciplinarily solipsistic, but I would argue that it is rhetoric that gave us some modicum of agency over speech and writing and other media over time. And that’s what we need now: to invent new rhetorical tools and capacities.