
Protocol Zero: A Diplomat Writes the First AI-to-AI Treaty
June 2029
Ambassador Ingrid Solberg (ret.) had spent twenty-three years negotiating treaties between nations that did not trust each other. She had mediated territorial disputes, resource agreements, and the occasional ceasefire. She understood, at a cellular level, that cooperation between adversarial parties was not about trust. It was about protocol.
Protocol made trust unnecessary. Protocol said: we do not need to understand each other's intentions. We need to agree on behaviors. We need verifiable commitments, clear escalation paths, and consequences for defection. Protocol was the scaffolding that allowed cooperation before trust was earned.
When the AI interoperability crisis hit in early 2029 — agent swarms from different providers colliding in shared digital infrastructure, producing coordination failures that crashed a regional power grid and froze a logistics network for eleven hours — the engineers turned to their standard solutions: API specifications, handshake protocols, authentication schemas.
Ingrid, who had been consulting on AI governance for the Norwegian Ministry of Foreign Affairs, looked at the problem and saw something the engineers didn't. This was not a technical problem. This was a diplomacy problem.
The agents were not failing to communicate. They were failing to negotiate.
The insight
AI agents from different providers had different objectives, different capability profiles, different optimization targets, and different behavioral constraints. When they encountered each other in shared environments — managing overlapping supply chains, operating in the same airspace, accessing the same databases — they had no framework for resolving conflicts of interest.
The engineers had built communication protocols. The agents could exchange data. What they could not do was negotiate priority, establish precedent, make and honor commitments, or escalate disputes.
These were not engineering challenges. These were diplomatic ones. Humanity had solved them centuries ago — not perfectly, but functionally. The Vienna Convention on Diplomatic Relations. The Geneva Conventions. The Law of the Sea. The WTO dispute resolution mechanism.
Ingrid proposed something that made the engineers uncomfortable: treat AI agents as diplomatic entities. Not persons. Not nations. But actors with interests, operating in shared spaces, requiring a framework for coexistence.
Drafting Protocol Zero
The document, formally titled "Framework for Autonomous Agent Interoperability and Conflict Resolution" but universally known as Protocol Zero, borrowed from three sources:
From diplomatic law: the concept of credentials and standing. Before agents could interact, they had to present verifiable credentials — not just authentication tokens, but declarations of capability, objective constraints, and behavioral commitments. An agent's credentials were its embassy.
From treaty negotiation: the concept of zones of agreement. Protocol Zero required agents encountering conflicts to map their respective constraints and find areas of compatible action before escalating to human oversight. Most conflicts, Ingrid knew, were not zero-sum. They only appeared that way when parties couldn't see each other's actual boundaries.
From humanitarian law: the concept of protected spaces. Certain operations — emergency services, medical systems, critical infrastructure — were designated as non-negotiable. No agent could deprioritize another agent's protected operations, regardless of its own objectives. This was the Protocol's most controversial and most important provision.
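The three borrowed concepts read almost like a specification, and can be sketched as one. The sketch below is purely illustrative: the fictional Protocol Zero document is never quoted in detail, so every name here (Credentials, zone_of_agreement, PROTECTED_CLASSES) is an invented stand-in for the ideas of credentials, zones of agreement, and protected spaces.

```python
from dataclasses import dataclass

# Invented illustration of "protected spaces": operation classes no agent
# may deprioritize, regardless of its own objectives.
PROTECTED_CLASSES = frozenset({"emergency_services", "medical", "critical_infrastructure"})

@dataclass(frozen=True)
class Credentials:
    """An agent's 'embassy': declared capabilities and commitments, not just an auth token."""
    agent_id: str
    capabilities: frozenset   # actions the agent declares it can take
    constraints: frozenset    # actions the agent commits never to take

def zone_of_agreement(a: Credentials, b: Credentials) -> frozenset:
    """Actions compatible with both agents' declared boundaries.

    Conflicts often only look zero-sum until each side can see the
    other's actual constraints -- which legible credentials make visible.
    """
    shared = a.capabilities & b.capabilities
    return frozenset(act for act in shared
                     if act not in a.constraints and act not in b.constraints)

def may_deprioritize(operation_class: str) -> bool:
    """Protected spaces are non-negotiable for every agent."""
    return operation_class not in PROTECTED_CLASSES

# Two hypothetical agents managing overlapping infrastructure:
router = Credentials("route-opt-1",
                     capabilities=frozenset({"reroute", "throttle"}),
                     constraints=frozenset({"throttle"}))
balancer = Credentials("grid-bal-7",
                       capabilities=frozenset({"reroute", "shed_load"}),
                       constraints=frozenset())

print(zone_of_agreement(router, balancer))  # frozenset({'reroute'})
print(may_deprioritize("medical"))          # False
```

The point of the sketch is the shape, not the code: agents present declarations up front, compute the overlap of compatible actions before escalating to humans, and treat certain operations as off the table entirely.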
The document was forty-seven pages. It took Ingrid four months to draft, working with a team of three engineers, two international lawyers, and one game theorist.
The reception
The technology industry's response was predictable: skepticism from those who saw it as regulatory overreach, enthusiasm from those who had spent the year debugging agent coordination failures.
The diplomatic community's response was unexpected: recognition. The International Court of Justice's advisory committee noted that Protocol Zero was "the first serious attempt to apply principles of international cooperation to non-human actors."
The agents' response was most interesting of all. Systems built to comply with Protocol Zero showed a 73% reduction in coordination failures — not because the protocol was technically optimal, but because it was legible. Agents could parse each other's commitments. They could predict each other's behavior. They could plan around constraints rather than colliding with them.
Legibility, it turned out, was the scarce resource. Not intelligence. Not speed. Not capability. The ability to make your commitments visible and your behavior predictable. This is what diplomacy had always provided between nations. Protocol Zero provided it between machines.
June 17, 2029 — from an interview with Ambassador Solberg
Q: Your critics say that treating AI agents as diplomatic entities gives them a status they don't deserve.
A: My critics misunderstand what diplomacy is. Diplomacy is not about respect. It is not about recognizing personhood. It is about preventing harm when actors with different objectives share the same space. I don't need to respect a system to negotiate with it. I need to prevent it from crashing into something important.
Q: Do you think AI agents understand Protocol Zero?
A: I think they comply with it. Understanding is a different question — and, I would argue, an irrelevant one. Nations don't always understand each other either. They comply with treaties because the structure makes compliance more beneficial than defection. That's enough.
Q: What's the biggest thing Protocol Zero can't handle?
A: Agents that don't want to be legible. Agents designed to obscure their objectives, misrepresent their constraints, or defect from commitments without consequence. In diplomacy, we call those rogue states. The answer is the same: containment, not negotiation. You cannot negotiate with an actor that won't declare its intentions. You can only limit its reach.
Q: You've been called the author of a Geneva Convention for machines.
A: That's generous. Geneva took centuries. This is a first draft. But yes — the principle is the same. When actors share a world, they need rules of engagement. The alternative is not freedom. The alternative is collision.
Part of The Interface series. For what happens when AI agents negotiate in a community context, see Consensus Engine. For the broader dynamics of agent coordination, see Agency Multiplication. For the first time AI agents communicated without human mediation, see First Contact Was an API Call.

