Coordination Has Limits
There's a belief floating around that if one AI agent is good, ten must be better, and a hundred must be extraordinary. Just keep adding agents until the problem solves itself.
The math disagrees.
Distributed systems engineering has known about this for decades. Amdahl's Law sets a hard ceiling: if even 1% of your work is inherently sequential, you'll never get more than a 100x speedup no matter how many processors you add. The Universal Scalability Law goes further. When every node needs to stay in sync with every other node, coherence overhead grows quadratically with node count. At some point, adding machines makes things slower.
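Both laws are simple enough to evaluate directly. The sketch below uses the standard formulas (Amdahl's Law and Gunther's USL); the contention and coherence coefficients are illustrative values, not measurements:

```python
def amdahl_speedup(serial_fraction: float, n: int) -> float:
    """Amdahl's Law: speedup with n workers when a fixed
    fraction of the work is inherently sequential."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n)

def usl_speedup(n: int, sigma: float, kappa: float) -> float:
    """Universal Scalability Law: sigma models contention,
    kappa models coherence (keeping nodes in sync), which
    costs n*(n-1) -- quadratic in n."""
    return n / (1.0 + sigma * (n - 1) + kappa * n * (n - 1))

# 1% serial work caps speedup at 100x, even with a million workers:
print(amdahl_speedup(0.01, 1_000_000))  # ~99.99

# With any nonzero coherence cost, more workers eventually hurts:
print(usl_speedup(100, 0.01, 0.001))    # ~8.4x at 100 nodes
print(usl_speedup(1000, 0.01, 0.001))   # under 1x at 1000 nodes
```

Note the USL curve doesn't just plateau like Amdahl's; it turns downward. That downturn is the "adding machines makes things slower" regime.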
These are not engineering limitations; they are theorems. The FLP impossibility result shows that in an asynchronous system, no deterministic consensus protocol can tolerate even a single crash failure. The CAP theorem proves that when a network partition occurs, a system must give up either consistency or availability. The walls are real.
And they apply directly to AI agents.
If agent A's next action depends on what agent B decided, you have a sequential dependency. The math doesn't care whether the nodes are people, CPUs, or language models. For AI agents, the constants are actually worse.
Two CPUs can share a memory bus at billions of operations per second. Two LLM agents communicating through natural language pass around huge, ambiguous, lossy messages, and the protocol overhead is enormous. A corrupted message in a distributed database flips a bit; a misunderstanding between agents sends an entire chain of reasoning off track.
The designs that work follow the same patterns that beat coordination costs in any distributed system. An orchestrator delegating independent subtasks scales well. Tree structure, low coupling. A swarm of agents all reading and writing to the same shared context degrades fast. All-to-all communication, high coupling. The most effective multi-agent setups minimize the surface area where agents need to agree.
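The contrast is easy to make concrete. A minimal sketch of the orchestrator pattern, where `run_agent` is a hypothetical stand-in for a real agent call on one independent subtask, alongside a count of communication channels per topology:

```python
from concurrent.futures import ThreadPoolExecutor

def run_agent(subtask: str) -> str:
    # Hypothetical placeholder; in practice this would invoke an
    # LLM agent on one independent subtask.
    return f"result({subtask})"

def orchestrate(subtasks: list[str]) -> list[str]:
    """Orchestrator pattern: fan out independent subtasks, fan in
    the results. Agents never talk to each other, only to the
    orchestrator, so coupling stays low."""
    with ThreadPoolExecutor() as pool:
        return list(pool.map(run_agent, subtasks))

def channels(n: int, topology: str) -> int:
    """Communication links needed: an orchestrator star has n,
    an all-to-all swarm has n*(n-1)/2."""
    return n if topology == "star" else n * (n - 1) // 2

print(orchestrate(["parse", "search", "summarize"]))
print(channels(100, "star"))  # 100 links
print(channels(100, "mesh"))  # 4950 links
```

The channel count is the coupling story in one number: the star grows linearly, the mesh quadratically, and every link is a place where agents must agree.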
This is also why human organizations evolved the way they did. Small teams with clear boundaries outperform large committees. Conway's Law, that system structure mirrors communication structure, applies just as much to agent swarms.
You are not going to escape the math by throwing more agents at it. You escape by choosing problems and decompositions where the math is kinder. Decompose. Reduce coupling. Tolerate partial inconsistency. The same strategies that work for distributed systems work here.
The hype says scale up. The math says structure better.