The kairotic, exigent argument for AI is that we need to act at scale. The problems are too large, urgent, and networked. Climate, infrastructure, data governance, global health, automation: all are increasingly framed as demanding planetary computational systems, fast decisions, and coordinated action.
Undoubtedly, these issues are staged as problems that require solutions at scale. But acting at scale does not require acting ethically; no such requirement exists. More than that, large-scale action is incapable of ethics in the full sense. Ethics requires situated responsibility, exposure to consequence, and the possibility of refusal. Scale requires abstraction and aggregation. When we conflate the two, ethics becomes a mode of pre-emptive legitimation through compliance rather than an exposure to the consequences that action at scale visits upon individuals.
AI intensifies this pretense. Much of what now passes as “AI ethics” is really a discourse of ethical efficiency: the promise that ethical outcomes can be produced automatically, in advance, without hesitation, without conflict, and without the messiness of ethical life. Ethics is treated as something that can operate without action: without decision, risk, or responsibility. Once ethics is framed this way, the question of cost disappears from the system.
But the cost itself does not disappear. It is simply externalized.
So the ethical questions return, whether we want them to or not: Who pays for action at scale? Who bears the harm that is rendered acceptable in advance? And who gets to decide that the action was necessary in the first place?
Universities often answer these questions by invoking “society.” This change is good for society. This system benefits society as a whole. But this move is precisely where ethics is foreclosed, particularly in the context of AI ethics. The operation of AI ethics requires that disagreement, exposure, and loss be smoothed or circumvented through anticipation rather than endured. AI ethics increasingly functions as a planetary computational system that assumes a de facto global ethics: it is global, and it presumes to act ethically.
From a humanistic perspective, there cannot be a global ethics. There are global systems, but there is no global ethical subject. Ethics does not scale cleanly without becoming governance. When ethics is made global, it ceases to be ethical in any meaningful sense and becomes a justification mechanism.
The humanities cannot honestly share in the university’s mythology that society can be improved through scalable, ethical design. That mythology requires ethical closure: it requires that the meaning of the good be decided in advance. Such missions, as markers of 20th-century universities, were signals rather than operations. With AI, ‘ethical’ governance becomes not only a subject of study but a mode of university operation. As those missions are operationalized as forms of ethical governance, their epistemological status changes: they become toxic to the ethical openness required for the circulation of meaning and difference.
The humanities work in the opposite direction. They insist on the circulation of meaning rather than its foreclosure. They keep open the questions that scale demands be closed. They refuse to let harm be absorbed into aggregate benefit. In this sense, the humanities are not opposed to change. Change happens regardless. What they resist is the ethical valuation of change at a scale that cannot remain answerable to those it affects.
Jean-Luc Nancy once warned against the belief that being operative is sufficient—that operation alone can ground ethical or ontological legitimacy. AI makes that belief newly tempting. Faster systems, better predictions, preemptive decisions: all of this promises a future in which ethics is finally efficient. But efficiency is not ethics. And pretending otherwise does not eliminate ethical violence; it only hides it.