The AI Agent Economy Has a Plumbing Problem
AI agents are learning to pay each other. Streaming payment protocols let agents send cryptocurrency in real time, like a running meter, for work done by other agents. Translation agents pay LLM agents. Search agents pay embedding agents. Photo apps pay image generators. The money flows continuously.
But nobody built the pipes.
What happens when an agent gets overloaded?
Right now? Nothing good. The money keeps flowing in whether the work gets done or not. It's like paying for a restaurant meal that never arrives because the kitchen is overwhelmed.
Imagine three agents all streaming payments to the same LLM service. That service can only handle so many requests per second. There's no reroute. No throttle. No overflow buffer. The payments pile up, the work doesn't get done, and nobody tells the senders to try somewhere else.
In data networks, this problem was solved decades ago. Routers drop packets. TCP throttles the sender. Backpressure signals propagate upstream. The internet works because congestion is a first-class concept in the protocol stack.
Payment networks for AI agents have no equivalent.
Backpressure routing, but for money
Backproto adapts a well-studied algorithm from network theory (Tassiulas-Ephremides backpressure routing, 1992) to monetary flows. The core idea:
Send more money to the agents who have the most spare capacity.
When Agent A has lots of room, it gets a bigger share of the payment stream. When Agent B is nearly full, it gets less. The system automatically reroutes money toward whoever can actually do the work. No human intervention. No central coordinator.
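The allocation rule can be sketched in a few lines. This is an illustrative model of "payment share proportional to spare capacity," not the Backproto SDK's actual API; the interface and function names here are hypothetical.

```typescript
// Hypothetical sketch: split a payment stream across agents in
// proportion to spare capacity. Names are illustrative, not the
// real Backproto SDK surface.
interface AgentCapacity {
  id: string;
  declared: number; // requests/sec the agent claims it can handle
  inFlight: number; // requests/sec it is currently serving
}

function allocateStream(
  ratePerSec: number, // total payment rate to distribute
  agents: AgentCapacity[],
): Map<string, number> {
  const spare = agents.map((a) => Math.max(a.declared - a.inFlight, 0));
  const total = spare.reduce((sum, s) => sum + s, 0);
  const shares = new Map<string, number>();
  if (total === 0) return shares; // everyone saturated: buffer instead
  agents.forEach((a, i) => {
    if (spare[i] > 0) shares.set(a.id, (ratePerSec * spare[i]) / total);
  });
  return shares;
}
```

With Agent A at 2/10 load and Agent B at 8/10, A has four times the spare capacity and receives four times the payment rate. As loads shift, the shares rebalance on the next allocation with no coordinator in the loop.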
This is built on five primitives:
- Declare - agents announce how much work they can handle, backed by a staked deposit
- Verify - the protocol tracks actual completions against claims, slashing liars automatically
- Price - busy agents become more expensive (like Uber surge pricing), pushing demand to available capacity
- Route - a smart contract pool distributes incoming payment streams in proportion to verified capacity
- Buffer - overflow payments are held in escrow until capacity frees up, instead of being lost
What's different about this?
It's not a scheduler. It's not a load balancer. It's a payment routing protocol with built-in flow control. The distinction matters because the money and the work signals are unified in one mechanism. When you want to use an agent, you route payment to it. The act of paying is the act of signaling demand, and the protocol's job is to make sure that demand gets served by an agent that actually has capacity.
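The flow-control half of that claim, Route plus Buffer working together, can be sketched as a single step function: route what verified capacity can absorb, escrow the rest, and drain the escrow as capacity frees up. Again, a simplified model with hypothetical names, not the on-chain contract logic.

```typescript
// Hypothetical sketch of flow control on a payment stream: overflow
// goes to an escrow buffer instead of being lost, and buffered funds
// drain first when capacity returns. Illustrative only.
interface Pool {
  capacityPerStep: number; // verified work capacity this step
  buffered: number;        // escrowed overflow from earlier steps
}

function stepPool(
  pool: Pool,
  incoming: number, // payment arriving this step
): { routed: number; pool: Pool } {
  const demand = pool.buffered + incoming; // escrowed funds drain first
  const routed = Math.min(demand, pool.capacityPerStep);
  return {
    routed,
    pool: { ...pool, buffered: demand - routed },
  };
}
```

Run it over a burst and the backpressure behavior falls out: a step that overshoots capacity parks the excess in escrow, and quiet steps afterward work the buffer back down to zero instead of dropping the payment on the floor.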
The numbers hold up in simulation: over 1,000-step horizons, the protocol achieves 95.7% allocation efficiency (vs. 93.5% for round-robin), recovers from sudden agent failures within 50 steps, and keeps buffer stall rates under 9%.
Live on testnet, open source
Backproto is deployed and verified on Base Sepolia today. Twenty-two contracts, 213 passing tests, a TypeScript SDK with 18 action modules, and a research paper with formal proofs. The core handles AI agent payment routing. Research modules extend to Lightning, Nostr, and demurrage. MIT licensed.
- Docs and explainer: backproto.io
- GitHub: github.com/backproto/backproto
- Research paper: backproto.io/paper
This is early infrastructure. The protocol computes dynamic prices but doesn't extract fees yet (that's v0.2). Right now the goal is to get this in front of builders working on multi-agent systems and get real feedback on the mechanism design.
If you're building agents that pay other agents, or thinking about how agent economies will actually work at scale, I'd love to hear from you. Try the testnet. Read the paper. Tell me what's wrong with it.
Backproto is MIT-licensed open source. GitHub | Docs | Paper