AI agents are the current shift in frontier AI. Much of their development has so far taken place in coding environments, but it is not hard to imagine that by the time our current students graduate, corporations will routinely employ internal AI agents, and many individuals may use personal ones.
What can these agents do? We are still at the crest of development, so the answer is unstable. But already we see agents that can write and execute code, query databases, interact with APIs, schedule tasks, negotiate with other services, and autonomously iterate toward goals. They are not “AGI.” They do not possess general intelligence in the strong sense. But they are general-purpose in practice: flexible systems capable of operating across domains.
It will be easy to misrecognize these agents as having subjective interiority. Some may even be designed to encourage user attachment. In computer science, however, an agent is simply a mechanism that performs operations in an environment. A thermostat is an agent. It monitors conditions and acts. Actor-network theory uses the term differently, but the overlap is instructive: agents need not be human to have effects.
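For readers who want the computer-science sense of "agent" made concrete, here is a minimal, purely illustrative sketch of the thermostat as a sense-act loop. The class and names are hypothetical, not drawn from any particular framework; the point is only that an agent can have effects without any interiority.

```python
# A minimal sketch of the computer-science sense of "agent": a mechanism
# that observes an environment and acts on it. Names here (Thermostat,
# act, heater_on) are illustrative, not from any real library.

class Thermostat:
    def __init__(self, setpoint: float):
        self.setpoint = setpoint

    def act(self, current_temp: float) -> str:
        # Perceive the environment (a temperature reading), then act on it.
        if current_temp < self.setpoint:
            return "heater_on"
        return "heater_off"

# The agent loop: sense, decide, act. No subjective interiority required.
agent = Thermostat(setpoint=20.0)
for reading in [18.5, 19.9, 21.2]:
    print(reading, agent.act(reading))
```

The AI agents discussed here are the same loop writ large: richer perceptions, more flexible actions, and far less legible decision rules.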
We have had agents for a long time.
What changes with AI agents is scale, flexibility, and opacity.
Two reinforcing concerns emerge.
First, these agents are general-purpose tools. They can be deployed in many domains—financial systems, logistics, healthcare administration, content moderation, infrastructure management. Second, although they are trained and “guardrailed,” we cannot practically know what they will do in advance, how they will attempt to achieve their goals, or what internal path led to a given output. They are powerful, partially unpredictable operators embedded in real systems.
This is not about rogue AIs escaping onto the open internet. It is about thousands of agents executing overlapping and sometimes conflicting missions—inside corporate firewalls, across platforms, and between institutions.
What does it look like when one corporate agent optimizes for cost reduction while another optimizes for regulatory compliance, and a third optimizes for user engagement? What does it look like when personal agents transact with corporate ones on our behalf?
Harm will occur.
Not necessarily because anyone intended it. Not necessarily because a crime was committed. But because distributed, semi-autonomous systems interacting at scale will generate outcomes no single actor foresaw.
When harm occurs in such a system, existing legal mechanisms will attempt to assign responsibility: negligence, product liability, failure of oversight. Courts will adapt. Legislatures will adapt.
But there is another conceptual gap worth addressing.
What happens when an AI agent causes harm while operating within its design parameters, under reasonable oversight, without identifiable human fault?
Suppose an autonomous vehicle’s hardware does not fail. The company followed regulatory guidance. The human safety driver was attentive. Yet a model-level decision—consistent with its training and internal optimization—results in a fatal collision.
No crime. No negligence. But a death.
Our legal system is built to assign responsibility to persons or corporations. But there is a space between subject and object that becomes visible in cases like this.
Jane Bennett, in Vibrant Matter, recalls the medieval legal concept of the deodand. From the thirteenth century into the nineteenth, when a nonhuman object caused a person’s death—a cartwheel, a beam, a horse—the object could be forfeited to the Crown. Etymologically, it was “given to God.”
The deodand did not grant the object moral agency. It did not declare the cartwheel guilty. It created a legal mechanism to acknowledge material efficacy without attributing personhood. The object had causal force; therefore, it could not remain untouched by consequence.
I am not suggesting we revive medieval forfeiture law. But the deodand reveals something conceptually useful: a juridical space between object and subject.
AI agents occupy that space.
They do not possess moral responsibility. They do not have rights. They do not endure consequences. But they operate with real causal power.
If we confine ourselves to asking “who is to blame?” we risk missing another question: how do we register and respond to the causal efficacy of systems that are neither mere tools nor moral agents?
One possibility—call it a deodatum—would not treat the AI as a person, but would treat certain models or agent configurations as entities subject to material consequence when their operations produce serious harm.
What might that look like?
If a specific model architecture is causally implicated in a lethal outcome, perhaps that model must be retired globally. Not patched and redeployed, but sacrificed. Or its capabilities must be permanently restricted. Or it must be publicly archived and barred from further commercial use.
Such measures would not replace civil liability or criminal prosecution. They would supplement them.
The point would not be punishment in a moral sense. It would be structural acknowledgment. A recognition that certain configurations of artificial agency, once implicated in harm, cannot simply continue as though nothing happened.
Without such a mechanism, harm risks dissolving into abstraction. “The system failed.” “An error occurred.” The passive voice absorbs consequence.
A deodand-like approach insists that when distributed artificial agency produces irreversible damage, something material in that assemblage must be withdrawn, altered, or forfeited.
This would not anthropomorphize AI. It would not grant it rights. It would not relieve corporations or designers of responsibility. It would, however, prevent us from pretending that these agents are just neutral instruments.
As AI agents proliferate—inside corporations, across networks, within personal devices—we will face harms that are not reducible to simple blame. The medieval jurists who invented the deodand were grappling with distributed causation in a premodern world.
We may need an updated vocabulary for ours.
Before thousands of agents begin acting on our behalf, and sometimes against our interests, we should ask: what would it mean to hold a system accountable without pretending it is a person?
And what would we be willing to sacrifice when it fails?