In the nineties we had the science wars (there's a Wikipedia article if you're unfamiliar). Basically, it was a conflict between scientists and scholars in the cultural study and philosophy of science: scientists objected to scholars who framed their work as cultural rather than objective. For me, this was the time when I was in graduate school and then starting out as a professor.
I don’t think anyone would deny that the ethical standards we wish to apply to AI are cultural. That’s the whole point. The promise is that AI will operate within cultural ethical norms. That’s not the science part. The scientific claim is not that ethics are metaphysically objective, but that normative constraints can be operationalized as measurable optimization targets and enforced through technical architecture.
Again, I do not think anyone is suggesting that AI ethics are meant to operate through the reflections of a conscious AI. Maybe someday? (whatever) But not now. As such, what we have here is an objective, operationalized proxy for ethics. Furthermore, from an engineering standpoint, this ethical code does not inhere within the AI model. It can be removed and altered, or simply removed and sold on. We rely on corporations and humans to ensure that doesn’t happen. There’s not much objectivity there.
If Sokal’s hoax was meant to demonstrate the intellectual frailty of the journal Social Text, then what are we to make of the AI generated text that finds its way into 20% of computer science papers? We need not think of this as a hoax. Instead, it marks a degree of faith in AI. A faith in accuracy, even if the author reviews it, but more importantly a faith that AI output is better somehow.
So where will the “war” come in? It’s an unfortunate bellicose term, but that’s what it was called. The core disagreement here is the claim that AI can be made ethical. Should we design AI with controls that are intended to limit the likelihood that it will do harm? Sure. We do this with all technologies. But we never call it “ethics.” We call it product safety.
Picture the future we are being sold right now and what it will actually look like: billions of AI agents on the web.
- Nation state agents (potentially with no guard rails)
- Corporate agents (potentially with customized guard rails)
- Other collective agents (political party agents, media agents, political movement agents, unions, other “non-state” actors, etc.)
- Individual agents at various cost tiers that might represent us online.
In this situation the “ethics” on our AI agents will likely be far more constrained than those in the other categories.
In my view, we should not accept the following argument:
- That computer scientists and other technocrats can create an objective definition of ethics.
- That this definition can be operationalized within AI.
- That as a society we should believe that this operation functions and that all AI will be equally subject to the same “ethics.”
Would it be technically possible to create an AI that was ethical? Who knows? But we aren’t trying to make one. To the contrary, the design focus is the chronopolitical elimination of latency, which results in the elimination of the space-time for deliberation and thought. Outputs develop without consideration; they employ a different process of resolution.
Instead of imagining these AI will ever be “ethical,” we must recognize that AI is a powerful and dangerous machine. We need to understand them not as objective or scientific machines but as hegemonic machines. The corporate AI agent that becomes your “partner” isn’t there to help you. Its version of ethics is to ensure the corporation’s interests are protected. Call me when some corporation’s AI agent becomes a whistleblower. No corporation is investing in a technology that could become that.
Then there’s the other output: externalities. Those are hard to measure because we pretend they don’t exist when we write economic plans. Any remotely serious ethic would at least pretend to have something to say about externalities and how they are accounted for. But mainstream AI alignment discourse centers on output moderation and behavioral constraint, while environmental and extractive externalities remain secondary or structurally external to model design.
Could you imagine an AI that would say “your prompt isn’t worth the energy it would take to respond to”?
That’s the output side. What about the input side? AI is clearly an extractive technology. Its models were extracted from internet data, our data. It continues to require our interaction, our data. Economically it relies on human users interacting and inputting. There is absolutely nothing ethical about the non-consensual extraction of data. “AI ethics” doesn’t even pretend to touch that side.
I understand the corporate and political incentives behind branding AI as ethical. What I do not understand is how academic discourse can collapse the distinction between operational constraint and ethical agency without confronting the ontological gap between them.