Perhaps this is a departure from concerns of distributed deliberation, fake news, and the like. Perhaps not. Here, though, I begin with the rhetoric of an emerging sub-genre regarding humanity’s slow, dismal apocalypse in the wake of intelligent machines. I offer two examples: one from the New Yorker, “Silicon Valley Has an Empathy Problem” by Om Malik, and the other from The Atlantic, “Watching the World Rot at Europe’s Largest Tech Conference” by Sam Kriss. Viewers of HBO’s Westworld (like myself) should have a good sense of the tone of such work, but, to offer a taste:
when you are a data-driven oligarchy like Facebook, Google, Amazon, or Uber, you can’t really wash your hands of the impact of your algorithms and your ability to shape popular sentiment in our society. We are not just talking about the ability to influence voters with fake news. If you are Amazon, you have to acknowledge that you are slowly corroding the retail sector, which employs many people in this country. If you are Airbnb, no matter how well-meaning your focus on delighting travelers, you are also going to affect hotel-industry employment.
In the accounts given by philosophers like Bernard Stiegler, the human stands on the point of vanishing entirely; we become something incidental to a total technological system. As he points out, a human being without any technological prostheses is nothing, an unsteady sac of flesh defined only by what it doesn’t have: no shelter, no protection, no society. We create tools, but technical apparatuses and their milieus advance according to their own logic, and these non-living objects have their own strange form of life. Our brains developed to control our hands; human consciousness itself was only the by-product of a technical evolution that moved from flint-knapping to the hammer to the virtual bartender; its real job isn’t to perform any particular task but to perpetuate itself. “Robots,” he writes, are “seemingly designed no longer to free humanity from work but to consign it either to poverty or stress.” Whatever illusion of predominance we had is fading: For others, like Benjamin Bratton, the real political subject is no longer a human individual but a “user,” which can be any kind of biological or digital assemblage. With production automated according to algorithmically generated targets, with the vast majority of all written language taking the form of spam and junk code, this system has less and less use for us—even as a moving part—with every passing day.
Web Summit is where humanity rushes towards its extinction.
There’s perhaps some gallows humor in the notion that the qualities supposed to separate us from machines (intelligence, empathy, and ethics) are qualities we so often fail to display. But let’s press on. One point shared by Malik and Kriss that connects with the election is the way technological developments are making human labor obsolete. In many respects, this is not a new story; think of John Henry. Indeed, many have argued that we are entering a historical period in which we will no longer be able to define ourselves by the work we do or place so much moral value on labor. Read, for example, the article in which James Livingston suggests that we “Fuck Work.” Or consider the articles at FiveThirtyEight and in The New Yorker on the idea of a universal basic income (i.e., a program in which every citizen is sent a check each month). Regardless of whether you think basic income is a good solution, the problem it seeks to address is plain: we may not need people to work as much as we once did. More pointedly, we don’t require people to do the same kinds of work.
In another example, Malik offers:
Otto, a Bay Area startup that was recently acquired by Uber, wants to automate trucking—and recently wrapped up a hundred-and-twenty-mile driverless delivery of fifty thousand cans of beer between Fort Collins and Colorado Springs. From a technological standpoint it was a jaw-dropping achievement, accompanied by predictions of improved highway safety. From the point of view of a truck driver with a mortgage and a kid in college, it was a devastating “oh, shit” moment. That one technical breakthrough puts nearly two million long-haul trucking jobs at risk.
One of the primary narratives of this election year is Trump supporters’ hope that he will keep his promise to bring back their lost factory jobs. Those lost jobs are blamed on trade agreements, which leads to other kinds of political-ideological affects, but many have been lost to technological change. Read about it in The Economist, The Washington Post, and Fortune. Citing this report, the Fortune article notes that only about 13% of lost manufacturing jobs are a result of trade agreements. The rest stem from domestic shifts, primarily the increased productivity per worker that automation (i.e., robots) brings to factories. However, if the truck driver didn’t need the truck in order to have a home or pay college tuition, would s/he care about the robot getting behind the wheel? One might object that an idea like basic income is far too socialist for American tastes, and that’s likely right. But government intervention in technological development to ensure that truck drivers or factory workers aren’t replaced by robots is no less socialist in the end, right? The only other option is training/education for displaced workers (and hopefully not just into another industry that is soon to be automated).
So this is where I depart from the spirit and tone of Malik’s and Kriss’ articles. There’s no doubt that they are describing a real problem. And do we need to think more carefully and empathically about the ethical implications of technological developments? Sure. Who’s going to say “no, we don’t” to such a proposition? When I hear such conversations, I think of the Italian Futurists’ fascist aesthetics or Walter Benjamin’s angel of history. There are many challenges and pitfalls here. However, I also think of how Fredric Jameson defined modernity in Postmodernism as “the way ‘modern’ people feel about themselves: the word would seem to have something to do not with the products (either cultural or industrial) but with the producers and the consumers, and how they feel either producing the products or living among them. This modern feeling now seems to consist in the conviction that we ourselves are somehow new, that a new age is beginning… we have to be somehow absolutely, radically modern; which is to say (presumably) that we have to make ourselves modern, too; it’s something we do, not merely something that happens to us” (310). Writing in the eighties and early nineties, Jameson suggested that we no longer felt this modern feeling in the postmodern era. But now I think we might once again. It’s fair enough to point to the insensitivity of developers, tech designers, Silicon Valley investors, and so on. They have a part in this. So too do the engineers, computer scientists, and corporate executives automating one industry after another. But the largest burden falls on all of us to build new subjective relations among these new systems. The question should not be what use the system has for us but what use we have for the system. Perhaps, like the moderns of a century ago who met the challenges of the second industrial revolution, we have our own technocultural challenges to face.
We can see robots as mechanical, as incapable of empathy or ethical choices beyond those their programming dictates, but if we view empathy and ethics as emergent network effects, then we can recognize that nonhumans have always participated in our capacities for empathic, ethical actions. Moving human labor out of the factory can be an empathic and ethical act enabled by robots, so long as the humans affected are not relegated to the “poverty or stress” that Stiegler and Kriss foresee. That’s the political problem that needs solving.