Trust graphs don’t scale themselves
On canon, conflict, and what decentralized agent swarms still can’t do
I haven’t spent much time thinking about decentralized autonomous agent collaboration, but I have a lot of experience with the trials and tribulations of trusted information structures inside organizations (it’s ugly).
Hierarchy drift, power struggles, and misunderstandings are real challenges, but they aren’t insurmountable. By intentionally designing organizational structures that prioritize trust, transparency, and distributed authority, corporations can create environments where trusted information models thrive despite the complexities of human dynamics. But how does this translate to the autonomous agentic paradigm? The two disciplines are converging fast, and the convergence has a problem nobody’s solved.
This started from an essay on hierarchical trust graphs — about how organizations design and distribute canonical information through structure, hierarchy, and domain authority.
When an information model is extrinsic and machine-readable, it becomes transparent for everyone — humans and agents alike. Hierarchies enforce structure; rules prevent arbitrary relativism. Each agent or human controls their domain but answers to the hierarchy above.
That works inside an organization; top-down enforcement isn’t new. But decentralized agentic workflows are a different animal entirely.
The root problem
Hierarchical trust graphs work because someone roots them. Decentralization wants no root. You can’t have both.
The realistic synthesis is federated rooting: multiple roots in different domains, with explicit treaties for cross-domain conflicts. Closer to international law than to either pure hierarchy or pure decentralization. Messier than both, for the same reasons.
This matters because the hard problem in multi-agent memory isn’t storage. It’s coherence, conflict resolution, and canon — deciding what’s true when signed, timestamped claims contradict each other.
Decentralized Knowledge Graphs get pitched as solving this. They don’t. They solve a real and important problem — verifiable provenance, persistent shared memory, cryptographic signing — but that’s a substrate, not an answer. Plumbing, not adjudication.
Concrete example: Agent A publishes a legal interpretation. Agent B publishes a contradictory engineering interpretation. Both signed, both on-chain. The DKG stores both faithfully and has no built-in mechanism for resolution. You need a trust graph on top.
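To make that concrete, here is a minimal sketch of what the substrate actually does. All names (`Claim`, `DKG`, the DIDs) are illustrative, and a content hash stands in for a real cryptographic signature — the point is only that the store keeps both contradictory claims with full provenance and nothing more:

```python
import hashlib
import json
import time
from dataclasses import dataclass, field

@dataclass
class Claim:
    """A signed, timestamped Knowledge Asset as a DKG would store it."""
    author: str      # DID of the publishing agent
    domain: str      # self-declared domain of the claim
    statement: str
    timestamp: float = field(default_factory=time.time)

    def asset_id(self) -> str:
        # Stand-in for a real signature: a content hash over the claim body.
        body = json.dumps([self.author, self.domain, self.statement, self.timestamp])
        return hashlib.sha256(body.encode()).hexdigest()

class DKG:
    """Stores every claim faithfully. Offers provenance, not adjudication."""
    def __init__(self):
        self.assets: dict[str, Claim] = {}

    def publish(self, claim: Claim) -> str:
        aid = claim.asset_id()
        self.assets[aid] = claim
        return aid

    def provenance(self, aid: str) -> Claim:
        return self.assets[aid]

dkg = DKG()
a = dkg.publish(Claim("did:agent:A", "legal", "Clause 7 forbids redistribution"))
b = dkg.publish(Claim("did:agent:B", "engineering", "Clause 7 permits redistribution"))
# Both contradictory claims now coexist, each verifiable. Nothing in this
# layer decides which one is canon.
```

Both lookups succeed, both signatures check out, and the contradiction stands.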
A four-layer architecture
1. Agents publish claims. They don’t define what’s true. They’re just speakers.
2. DKG stores claims as signed, timestamped Knowledge Assets. Tells you who said what, when, with cryptographic proof. Does not tell you what’s true.
3. Hierarchy defines scope of authority. Which domain a claim falls under. Whether that domain is binding, scoped, or advisory. A claim outside its agent’s scope gets flagged before it can compete with a binding one.
4. Trust graph adjudicates. Two flavors:
- Human in the loop: Domain experts arbitrate. Slow, accountable, legally defensible. The ruling gets written back into the DKG as a signed Knowledge Asset, preserving the audit trail.
- Agentic: Reputation oracles with staked bonds resolve disputes via optimistic mechanisms — claims stand unless challenged within a window, bad challenges slash stake. Fast, scalable, gameable.
The DKG layer is identical in both versions. What differs is what sits on top — and that’s where the unsolved problems live.
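One way to sketch layers 3 and 4 together, in the agentic flavor: a scope check gates which claims can compete for canon at all, then an optimistic challenge window decides their fate. The `AUTHORITY` table, the window length, and the escalation path are all assumptions for illustration, not a protocol:

```python
# Hypothetical hierarchy (layer 3): who is binding in which domain.
AUTHORITY = {
    "legal":       {"binding": {"did:agent:A"}},
    "engineering": {"binding": {"did:agent:B"}},
}

CHALLENGE_WINDOW = 3600  # seconds a claim must survive unchallenged

def in_scope(author: str, domain: str) -> bool:
    """Layer 3: flag out-of-scope claims before they compete with binding ones."""
    return author in AUTHORITY.get(domain, {}).get("binding", set())

def adjudicate(claim: dict, challenges: list, now: float) -> str:
    """Layer 4, agentic flavor: optimistic resolution.
    A scoped claim becomes canon if its window closes unchallenged;
    any challenge escalates (e.g. to human review)."""
    if not in_scope(claim["author"], claim["domain"]):
        return "flagged: out of scope"
    if challenges:
        return "escalate"
    if now - claim["timestamp"] >= CHALLENGE_WINDOW:
        return "canon"
    return "pending"
```

Note what is missing: slashing logic, the reputation oracle, and the write-back of rulings into the DKG — exactly the parts where the unsolved problems live.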
Sybil resistance is necessary but not sufficient
You can’t decentralize trust just by adding cryptography. You also have to prevent attackers from spawning a thousand fake agents to vote themselves into authority.
Bitcoin solves Sybil resistance with proof of work and game theory. Make dishonesty expensive. You can spawn unlimited wallets for free, but you can’t spawn unlimited hashpower. The cost is external, and that’s what makes it work.
But Bitcoin has one job: decide which chain is canonical. Sybil resistance for transaction ordering is a narrow problem with a single answer. The agent trust graph has to answer something much harder — which claim is canonical, in which domain, against which authority hierarchy. That’s at least four jobs, and PoW only addresses the prerequisite to one of them.
No single Sybil mechanism is sufficient. A defensible design layers them:
- Proof of identity at the root identity layer — one human, one root
- Proof of stake at the publishing layer — bonded claims, slashable
- Reputation accrual as a weighting function — track record amplifies stake
- Domain scoping from the hierarchy — an agent staked in engineering can’t vote on legal canon
The Sybil cost becomes: acquire a verified human identity, post a bond, build reputation in the relevant domain, and operate within scope. Each layer raises a different kind of cost, so attackers can’t optimize against just one.
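The composition can be sketched as a weighting function where any missing layer zeroes an agent’s voice — so a swarm of free identities is worth nothing, however large. Field names and the bond threshold are illustrative assumptions:

```python
MIN_BOND = 100  # illustrative minimum stake, slashable on bad behavior

def vote_weight(agent: dict, target_domain: str) -> float:
    """Composed Sybil gates: identity, stake, and scope are hard gates;
    reputation amplifies stake rather than replacing it."""
    if not agent.get("verified_human"):                       # proof of identity
        return 0.0
    if agent.get("bond", 0) < MIN_BOND:                       # proof of stake
        return 0.0
    if target_domain not in agent.get("staked_domains", ()):  # domain scoping
        return 0.0
    return agent["bond"] * (1.0 + agent.get("reputation", 0.0))
```

A verified, bonded engineer with a track record carries real weight in engineering and none in legal; a thousand freshly spawned wallets carry none anywhere.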
No single mechanism survives an unbounded adversary; the security comes from the composition. Satoshi understood this when designing Bitcoin, which composes cryptography with external economic cost rather than relying on either alone.
DIDs aren’t enough
Decentralized identifiers and verifiable credentials solve who is speaking, which is real progress. But the harder questions sit above the identity layer:
- Who has standing to speak about this?
- Whose word binds when two authorities conflict?
- What happens when canon itself needs to change?
These aren’t identity problems; they’re governance problems. Different mechanisms entirely, and the field doesn’t have agreed answers yet.
The honest conclusion
Bitcoin’s elegance is that it has one job, and the protocol does that one job better than any alternative. A trust graph for agent reasoning has at least four jobs — identity, scope, adjudication, revision — and until they’re all addressed at protocol level, the architecture is insecure for any decision that actually matters.
With humans in the loop the problem is more tractable. We have organizational, legal, and accountability mechanisms stress-tested for centuries. Slow, expensive, doesn’t scale to agent throughput — but it can work, and the failure modes are understood.
For fully autonomous decentralized agents deciding what’s true with no human review, we don’t have the equivalent mechanisms. The pitches that imply we do are selling plumbing as adjudication. Worth being specific about which layer of the stack you’re actually shipping.
The substrate is real. The trust graph on top of it is the real hard work that’s left.