
The Liability Vacuum: Responsibility Without Agency
When a car crashes, someone is responsible. The driver, the manufacturer, the mechanic who failed to fix the brakes. Legal systems have centuries of precedent for assigning blame and allocating costs.
When an autonomous vehicle crashes, who is responsible?
The question seems simple. The answer is not.
How Liability Actually Works
Legal liability serves several functions:
Compensation
When harm occurs, someone must bear the cost. Liability rules determine who—the person who caused the harm, or the person who suffered it.
Deterrence
If you might be held liable, you're incentivized to prevent harm. Liability creates feedback loops that improve safety.
Moral Expression
Liability expresses social judgments about responsibility. It says: you had control, you had a duty, you failed, you must make amends.
Closure
Victims need to hold someone accountable. Liability provides a target for blame and a mechanism for resolution.
AI systems challenge all four functions.
The Distributed Causation Problem
Consider an AI system that makes a harmful decision. Who caused it?
The Training Data Contributors
The model learned from data created by thousands or millions of people. Their contributions shaped its behavior. Are they responsible?
Practically, no. The contributions are too diffuse, too indirect, and the contributors never consented to this use.
The Model Developers
The team that built the model made architectural choices that influenced behavior. But they didn't program the specific decision—it emerged from training.
They could be responsible for "creating a system that could do this," but this is different from traditional product liability. The harm wasn't a defect; it was a capability.
The Deployers
The company that deployed the AI made it available. They chose the context and constraints. But they may not have foreseen the specific failure mode.
Current law often assigns liability here—but is this just because deployers are identifiable and have money?
The Users
The person who used the AI and relied on its output made a choice. But if the AI presented its output confidently and the user had no way to evaluate its accuracy, was the reliance unreasonable?
The AI Itself
The system made the decision. But it has no assets, no personhood, no capacity to bear liability in any meaningful sense.
Liability law assumes a chain of causation with identifiable links. AI systems create diffuse webs where causation is everywhere and nowhere.
The Foreseeability Gap
Traditional liability often requires foreseeability—you're responsible for harms you could have predicted.
AI systems routinely produce unforeseeable outputs:
Emergent behaviors: Capabilities that weren't trained for but emerged from the training process.
Edge case failures: Inputs the system handles in unexpected ways, with no reliable way to identify which inputs will fail before they do.
Interaction effects: Multiple AI systems interacting in ways none of their developers anticipated.
Prompt-dependent behavior: The same system producing different outputs based on how it's queried.
If no one could have predicted the specific failure, who is liable? The traditional answer—no one—leaves victims uncompensated and creates no deterrence.

