As multiple news outlets have reported, the US military employed Palantir’s Maven Smart System coupled with Anthropic’s Claude AI to identify, prioritize, and strike over 2,000 targets in Iran.

“AI is a tool that helps our warfighters process enormous amounts of data faster than any human could alone, and what we saw in Operation Epic Fury, over 2,000 targets struck with remarkable precision, is a testament to how these capabilities can be used responsibly and effectively,” Rep. Pat Harrigan (R-NC), who also serves on the House Armed Services Committee, told NBC News in a statement.

I am completely bracketing the other, and understandably more pressing, political question of whether the US should be doing this.

My interest in this post is simply in the realization that the same frontier AI model that any of us might use is being employed to dramatically improve the US military’s lethal power. Presumably it will do, and is already doing, the same across the battle space, with different degrees of intensity based largely on degrees of capital investment. Only a powerful state like the US can make the investment required to accomplish this kind of integration. Harrigan’s assertion of the “moral” value of the capacity to kill people in greater numbers, with greater efficiency, and at greater speed recalls Virilio’s dromology. The fact that a school was bombed tells us something about how Harrigan defines “responsible.”

However, it’s a different part of all technological relationships that interests me here. Let’s call it the Chekhov’s gun part: if you build these things, they will go off. The new capacity to execute strikes on 2,000 targets with incredible efficiency produces new opportunities for warfare. That’s almost tautological. Just as the technological capacity for blitzkrieg shaped Nazi warfare, and as autonomous lethal drones have been shaping warfare in Ukraine, the Maven Smart System now shapes warfare for the US. And while Maven has been in operation for a few years, this is the first time we have seen it in a war.

The plain truth today is that current frontier AI will be used by the US to conduct war. If you are not a US citizen, it is reasonable to assume this is a weapon that might be pointed at you in the future. If you are a US citizen, you have to hope that the pinky swear that the government won’t use AI against its own citizens will hold. And the US won’t be the only nation with lethal planetary AI.

As we know, when we interact with these bots, we are contributing to their increased capacity. And inasmuch as they are generalizable machines (as they claim to be), any contribution to increased capacity is simultaneously a contribution to increased lethal capacity. Nor is it just our interactions with their products: AI companies feel empowered to make free use of all web content regardless of copyright. As such, the digital records of our lives, such as this blog, now contribute to the lethal capacity of nation states. That is largely unavoidable unless we abandon digital life entirely. Alternatively, if we are willing to accept our contribution to AI lethality the way we accept our contribution to automobile fatalities each time we drive, then we can keep going along, as we do with cars. It’s certainly not impossible. In fact, that is probably what we will do.

I am not here to write a blog post about abandoning digital life (that old chestnut). I’m pursuing an account of the epistemic and ethical conditions of AI as they produce consequences beyond their own operation (e.g., when they enable military strikes). AI output can claim no knowledge or ethical responsibility, but both knowledge and consequence are produced as the output turns a building into rubble. There are humans in (and on) various loops. We could hold them accountable as the ones who know and who bear responsibility. They might take legal ownership, but they do not actually have a valid claim to articulate the knowledge in AI output.

Why can’t they make a valid claim about the knowledge in AI output? Because AI output has no epistemic qualities on which claims can be made. It does not know anything and thus cannot represent knowledge that can be read by a human.

There’s no there there. Instead, the human reader produces knowledge from non-knowledge and ethics from non-ethics. Or claims to.

In short, one way of thinking about the ethical consequences of using AI is the following: when we claim to know something because we have read AI output, we are making an unethical claim. I realize that seems to contradict experience, so I need to account for that discrepancy. I will return to this matter in more detail later; it is a central part of my book project.

I’ll just keep it in the realm of experience for now. As much as many of us, myself included, have used AI and felt that we have gained knowledge, we have also experienced the way far simpler algorithms shaped our sense of self and our understanding of the world through social media. Consider, for the moment, that it might be possible for AI to do something similar but more powerful and subtle (given its increased capacity): to effectively hack our cognitive processes. It’s nothing as sci-fi or obvious as mind control. Was, or is, social media mind control? It seems more like mind un-control, a technological realization of Foucault’s repressive hypothesis. Where social media pushed us toward expressing extremes, AI’s predictive operation runs in a different direction, toward homogeneity and infinite repetition.

The infinite repetition of the same is the ultimate state of non-knowledge and non-ethics. All work and no play makes Jack a dull boy. Our shining AI future.
