We do things. It’s an interestingly Latourian idiomatic expression, a kind of dancer-and-the-dance moment. And in the moment of that linguistic confusion, we become those things: consumers, workers, believers, lovers, and so on. Not in a permanent sense, though; we are always moving from one thing we do to another. One of the things we do, increasingly and often without much thought, is interact with algorithms. Sadly there’s no convenient “-er word” for that, but it is a thing we do nonetheless.
In a recent Atlantic article, Adrienne Lafrance reports that “Not even the people who write algorithms really know how they work.” What does she mean by that? Basically that no one can tell you exactly why you get the particular results that you get from a Google search or why Facebook shows you one set of status updates rather than another. (I notice I get a very different set of updates on the Facebook app on my phone than I do from my web browser.) And of course that goes on and on, into the ads that show up on the websites you visit, the recommendations made to you by Amazon, Netflix, and other such sites, etc.
It would be easy enough to call this capitalism at work. No doubt these corporations are making money from these algorithmic operations (and if they weren’t, they’d change them). But that doesn’t mean they understand how they work either. It would also be understandable if one responded with a degree of concern, maybe even paranoia, over the role these inscrutable machines play in forming our social identities. Lafrance points, for example, to the role their data collection might play in decisions about loan applications and the like. However, at this point, I’d want to recall Ian Bogost’s earlier Atlantic article about our tendency to overvalue, whether by demonizing or deifying, the power of algorithms.
That said, it might be useful to put this conversation in the context of Robert Reich’s New York Times editorial, where he argues that “Big tech has become way too powerful.” As he observes:
Now information and ideas are the most valuable forms of property. Most of the cost of producing it goes into discovering it or making the first copy. After that, the additional production cost is often zero. Such “intellectual property” is the key building block of the new economy. Without government decisions over what it is, and who can own it and on what terms, the new economy could not exist.
But as has happened before with other forms of property, the most politically influential owners of the new property are doing their utmost to increase their profits by creating monopolies that must eventually be broken up.
Certainly algorithms are among the most valuable of those forms of information and ideas. Intellectual property law is another thing we might add to the complex network that makes up an algorithm. Not just code and data, but also servers, data networks, server farms, electricity, programmers, technicians, and other human workers. AND they are also legal entities, created by intellectual property law. Reich’s point is that when we argue between government control and free markets we miss the point: without government there cannot be a free market. In a fairly obvious example, without police, courts, and jails, how would the concept of property function? Reich suggests that we may want to structure the market differently in relation to these patents if we want to protect ourselves against this growing monopoly.
In the specific context of algorithms, one might wonder about the social-cultural value of proliferating them, and thus of shifting the rules of the market to encourage proliferation. It may not be possible, as Lafrance concludes, to exert intentional control over what algorithms will do. This makes sense because I can’t predetermine what an algorithm will show me without already knowing what it is possible for it to find, which, of course, I cannot. However, it is possible to have a variety of algorithms showing us many different slices of the informational world, a variety that would not shift the overall role of algorithms in our lives but would downplay the influence of an increasingly limited algorithm monopoly.
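The idea that different algorithms produce different “slices” of the same informational world can be made concrete with a toy sketch. Everything here is invented for illustration (the items, the scoring rules, the function names); real feed-ranking systems are vastly more complex and, as Lafrance notes, not fully inspectable even by their makers. The point is only that the same pool of content, passed through two different ranking rules, surfaces two different worlds:

```python
# Toy illustration: one pool of items, two ranking algorithms,
# two different "slices" of the informational world.
# All data and scoring rules here are hypothetical.

items = [
    {"title": "A", "age_hours": 1,  "likes": 3},
    {"title": "B", "age_hours": 30, "likes": 90},
    {"title": "C", "age_hours": 5,  "likes": 40},
    {"title": "D", "age_hours": 72, "likes": 500},
]

def rank_by_recency(feed):
    # Newer items first, regardless of engagement.
    return sorted(feed, key=lambda item: item["age_hours"])

def rank_by_popularity(feed):
    # Most-liked items first, regardless of age.
    return sorted(feed, key=lambda item: -item["likes"])

# Each "feed" shows only its top two items.
recency_slice = [item["title"] for item in rank_by_recency(items)[:2]]
popularity_slice = [item["title"] for item in rank_by_popularity(items)[:2]]

print(recency_slice)     # ['A', 'C']
print(popularity_slice)  # ['D', 'B']
```

Neither ranking is neutral or complete; each foregrounds some items and buries others. A monoculture of one such rule narrows what everyone sees, while a plurality of rules, as the paragraph above suggests, at least varies the slices.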
In rhetoric, we often talk about discourse communities, that is, communities that are formed through texts and textual practices. The discourse communities in which we participate, as we typically say, shape our identity. These communities might correspond to our family, where we grew up, our gender and ethnicity, and later our professions, our religious beliefs, our politics, and so on. Our digital discourse communities, filtered through social media and search engines, are mediated by algorithms. It would be too much to say they are determined by algorithms, but those calculations are a significant shaping force. If, particularly in rhetorical, discursive terms, we are readers and writers, then we are the things we read, the things we write in response to what we read, and the communities we encounter through reading. In the digital world, these are algorithmic productions.
Algorithms are objects persisting in and dependent upon an information-media ecology that is not simply digital but is also material, economic, legal, and living (i.e., it involves humans and is generally part of the Earth). Humans (some) write algorithms, and humans (many) interact with them. We make laws about them, and we try to control them. But we cannot fully understand them, or what they do, or why they do what they do (or even whether asking why makes sense). They are our effort to interact with a mediascape that is vaster and faster than our ability to understand, despite our role in its creation.
How can there be a contemporary rhetoric, even one that wants to focus solely on human symbolic acts, that is not significantly algorithmic?