Ezra Klein has an interview with Naomi Klein today regarding her new book, Doppelgänger: A Trip Into the Mirror World. It’s a wide-ranging conversation. Klein’s mirror world has in part to do with her being confused online with Naomi Wolf (the ’90s feminist turned conspiracy-theorist podcaster). But that’s just one interesting story of mirroring within a larger theme for Klein.

Here I am interested in how the topic of America as fascist and the rise of AI mirror one another. In this way, the interview illuminates connections between fascism and technology that are often elided in our understandable focus on issues of eugenics and race.

In the interview, fascism is partly understood as a response to an “injury to story”: the story of the self, the nation, etc. Two injuries are discussed. The first is the injury to the ruling elite as they begin to find themselves held accountable in new ways. The podcast discusses “Me Too,” but I would also think of the response to the 2008 recession. Climate regulation is mentioned. The other injury story regards the masses. As Ezra puts it, for those Americans who feel this injury, “they are losing the story they’re a part of — the story of their own history and how they are the good guys in history — certainly not a checkered history. The story of their nation and how great their nation is and what its destiny is.” This injury story is familiar to us from the MAGA movement.

[Just a side note that this argument is a little problematic to me, or at least underdeveloped in the podcast. Surely the left also builds identity arguments around injuries, most notably slavery. In fact, the Judeo-Christian tradition is itself an injury story, or at least Nietzsche would make that argument. So if we were somehow to get beyond stories of injury/justice… well, let’s just say that would be something.]

I will just follow Virilio on this. “It was only a skip and a jump from social Darwinism to biological cybernetics. The jump was easily taken in the Second World War, by the very people who victoriously opposed the biocracy of a National Socialist State that based its political legitimacy on the utopia of a redemptive eugenics. Total mobilization and motorization have always been two sides of the same coin in the race for biological and technological supremacy” (The Art of the Motor 133). This is 1993, in the wake of the collapse of the Soviet Union. But it is also the America of the first Gulf War, a war we are fighting once again/still. So I don’t think we can just point to a decade or so ago, or even to the Tea Party or Birtherism or whatever, for the kind of diagnosis that’s being made here. I would go back to Whitney Houston’s 1991 Super Bowl performance of the national anthem to see the immediate emergence of a newly militarized, nationalistic, and often jingoistic America. [This, again, was during the Gulf War.]

But we can go back further, to the 1956 Dartmouth AI workshop, which no one would term fascist, and yet which was a direct effort at technological supremacy through “total mobilization and motorization.”

In short, we’ve been at this for a while now.

As promised above, the other Klein and Klein topic I wanted to mention was AI. They come at the topic through a conversation on data centers. Ezra poses a question about understanding our purpose as humans in an AI world. Naomi responds with a socio-political observation: “I don’t think people have the capacity to think about what their lives are for if A.I. is replacing their jobs. Because they’re worried about how they’re going to eat and pay their rent. They have absolutely no indication that they live in a society that cares at all about that question.”

I am sure people are worried about food, clothing, and shelter. I think it is fair to say that most people’s lives are dominated by concerns for survival, though how those concerns manifest shifts over time. I want to define survival in relation to their first topic. As such, let’s define survival as “continuing to live the stories of our lives as we tell them.” An injury to that story is a threat to survival and a potential source of fascist thought.

And yet, we might be coming to a point where AI begins to threaten identity. As Naomi Klein says here,

It’s really scary that the merger of Silicon Valley with the Trump administration means that these devices and these platforms that sold themselves as our liberation — first we found out that they were tracking us to advertise to us, but now we find out that they have integrated with the Trump administration in all kinds of ways that we don’t fully understand, in terms of what data was taken through DOGE, what Palantir is doing.

But what is emerging in real time is that there are profiles of us, and A.I. is superpowering this.

Many technocrats might imagine there is a “good” version of this. However, as I am more in line with Virilio, I tend to see these stories as adjacent. The idea that the bad/evil can be purified either technologically or biologically (if there is still a difference) only leads to authoritarianism or fascism.

With Virilio as a starting point, the remainder of the counter-narrative emerges. We can tell the story of AI as technofascism, of AI itself as a fascist, identitarian mythology of mastery. That AI myth begins with the presumption that “we” (whoever the we are) represent “intelligence” such that we can securely produce an artificial one. This desire for the parthenogenetic emergence of thought, without interaction with human community, echoes the fundamental fascist call for purity. All information is consumed, but only one voice speaks: the authorless author who becomes anointed as an empirical, super-human intellect rather than a non-entity.

It isn’t possible to create a “good” AI any more than it is possible to create a good hammer, or a good person for that matter. That doesn’t mean that we just ignore the effects of hammers, people, or AI. Sometimes hammers, people, or AI might be good, or at least good for something, someone, etc. That’s a useful observation because then we can ask about the other times and places where they aren’t “good,” and we can discuss how we will address the negative consequences and externalities of “doing good.”

And let’s not ask an AI how to do that.
