
For Policymakers: Governance Lag in the Agent Era
You are regulating a target that moves faster than you can regulate.
AI capabilities are advancing on the scale of months. Policy processes operate on the scale of years. This mismatch—governance lag—is not new, but AI makes it acute.
Agency multiplication means single actors can deploy autonomous systems at scale. Traditional regulatory approaches assumed humans in the loop. That assumption is breaking.
This is not a technology primer. It is a governance orientation.
The Core Problem
Governance lag occurs when:
- Technology capability advances
- Harms or risks from that capability emerge
- Society recognizes the harms
- Political will forms to address them
- Policy is drafted, debated, and passed
- Agencies develop implementation rules
- Enforcement mechanisms are established
- ...and by then, technology has advanced further
In slow-moving technology domains, this lag is tolerable. In fast-moving domains, you are always regulating the previous generation of problems while new ones emerge.
AI is the fastest-moving domain governance has ever faced.
What Makes AI Different
Speed of Change
AI capabilities are improving faster than any previous technology:
- Foundation models improve with each generation (annually or faster)
- Open-source proliferation happens in months after major advances
- Cost of capability drops rapidly (what required millions now requires thousands)
By the time you understand the current generation, the next generation is deployed.
Dual-Use Nature
AI is general-purpose. You cannot regulate "AI" like you regulate pharmaceuticals or nuclear materials. The same technology that threatens can also cure.
Tight restrictions on AI development may:
- Slow beneficial applications
- Push development to less regulated jurisdictions
- Advantage adversaries who do not restrict
This is not an argument against regulation. It is an argument for precision.
Distributed Development
Nuclear weapons require nation-state resources. Bioweapons require specialized facilities. AI capability is increasingly available to individuals with consumer hardware.
You cannot control AI by controlling a small number of actors. The proliferation has already happened.
Agent Autonomy
Autonomous agents act without per-action human oversight. When a human is in the loop for every action, you can assign responsibility. When thousands of agents act autonomously, responsibility diffuses.
Current liability frameworks assume identifiable human decisions. Agent systems challenge that assumption.
What Traditional Approaches Miss
Input-Based Regulation
Regulating the inputs to AI (compute, data, training) is technically feasible but has limits:
- Compute thresholds: The current approach (regulating models above certain compute thresholds) is already being gamed. Capabilities that once required large compute are being achieved with less.
- Data regulation: Useful for some purposes, but training data is not the only determinant of capability.
- Export controls: Partially effective for hardware, but software and weights are harder to control.
Input regulation buys time. It does not solve the problem.
Output-Based Regulation
Regulating what AI systems produce or do has different limits:
- Defining harms: Hard to specify in advance what outputs are harmful. "Misinformation" is not a bright line.
- Attribution: Determining whether an output came from AI or humans is increasingly difficult.
- Jurisdiction: AI systems can operate from anywhere. Outputs can flow across borders.
- Enforcement: How do you monitor and enforce at the scale at which AI operates?
Output regulation is necessary but insufficient.
Sector-Specific Approaches
Regulating AI in specific sectors (healthcare, finance, transportation) leverages existing frameworks. This is useful but:
- General-purpose AI does not fit neatly into sectors
- Sector regulators may lack AI expertise
- Coordination across sectors is weak
- Novel uses emerge that do not fit existing categories
Sector-specific regulation is valuable but incomplete.