
Sync Guide

xNet uses two complementary sync strategies, each optimized for a different data shape:

| Data type | Mechanism | Conflict resolution |
| --- | --- | --- |
| Rich text (editor content) | Yjs CRDT | Character-level merge — no conflicts by design |
| Structured data (properties) | NodeStore with Lamport clocks | Field-level Last-Writer-Wins (LWW) |

Both strategies sync over the same multiplexed WebSocket connection.

Each node with editor content gets a Y.Doc. Yjs handles character-level CRDT merging internally — two users typing in the same paragraph at the same time will see their edits merge without conflict. The sync protocol exchanges state vectors and diffs:

  1. Peer A sends sync-step1 (its state vector)
  2. Peer B responds with sync-step2 (the diff A is missing), plus its own sync-step1
  3. After the initial exchange, incremental updates flow as sync-update messages
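A minimal sketch of that exchange using the Yjs API directly (the message framing and transport are assumed; xNet's provider wraps the same calls over the shared WebSocket):

import * as Y from 'yjs'

// Peer A and Peer B each hold their own Y.Doc for the same node.
const docA = new Y.Doc()
const docB = new Y.Doc()

// sync-step1: A sends its state vector, a compact summary of what it already has.
const stateVectorA = Y.encodeStateVector(docA)

// sync-step2: B encodes only the updates A is missing, and sends its own
// state vector back so A can reply in kind.
const diffForA = Y.encodeStateAsUpdate(docB, stateVectorA)
const stateVectorB = Y.encodeStateVector(docB)

// A applies the diff; both documents now converge.
Y.applyUpdate(docA, diffForA)

// After the initial exchange, incremental edits flow as sync-update messages.
docA.on('update', (update: Uint8Array) => {
  // broadcast(update) to the room; peers apply it via Y.applyUpdate(docB, update)
})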

Property changes (title, status, assignee, etc.) go through the Change<T> pipeline:

interface Change<T> {
  id: string
  type: string
  payload: T
  hash: ContentId               // cid:blake3:... content-addressed
  parentHash: ContentId | null  // Hash chain linkage
  authorDID: DID
  signature: Uint8Array         // Ed25519 over hash
  lamport: LamportTimestamp     // Causal ordering
}

Each change is signed with Ed25519, content-addressed with BLAKE3, and linked into a hash chain. Conflicts are resolved by comparing Lamport timestamps — higher time wins, with DID string as a deterministic tie-breaker.
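A sketch of what field-level LWW looks like under those rules. The helper names and shapes here are illustrative, not xNet's actual API; only the comparison rule (time first, DID as tie-breaker) comes from the guide:

// Illustrative shapes; only the comparison rule is taken from the text above.
interface LamportTimestamp { time: number; author: string /* DID */ }

function compareLamportTimestamps(a: LamportTimestamp, b: LamportTimestamp): number {
  if (a.time !== b.time) return a.time - b.time
  return a.author < b.author ? -1 : a.author > b.author ? 1 : 0 // DID tie-breaker
}

interface FieldState<V> { value: V; writtenAt: LamportTimestamp }

// Apply an incoming write to a single field: the higher timestamp wins, deterministically.
function applyFieldWrite<V>(
  current: FieldState<V> | undefined,
  incoming: FieldState<V>,
): FieldState<V> {
  if (!current) return incoming
  return compareLamportTimestamps(incoming.writtenAt, current.writtenAt) > 0
    ? incoming
    : current
}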

The MetaBridge keeps the Y.Doc’s metadata map in sync with NodeStore properties, but the data flow is intentionally one-directional:

NodeStore → MetaBridge → Y.Doc meta map (write)
Y.Doc meta → Editor UI (read-only display)
Editor UI → mutate() → NodeStore (signed writes)

This prevents malicious Yjs updates from poisoning structured data. Property changes always go through mutate() → signed Change pipeline, never through the Yjs document directly.
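A sketch of the write direction of that bridge (the function name and map key are assumptions): verified NodeStore properties are copied into the Y.Doc's meta map for display, and nothing ever flows the other way.

import * as Y from 'yjs'

// Hypothetical wiring: push verified NodeStore properties into the Y.Doc meta map.
function bridgeMeta(doc: Y.Doc, properties: Record<string, unknown>): void {
  const meta = doc.getMap('meta')
  doc.transact(() => {
    for (const [key, value] of Object.entries(properties)) {
      meta.set(key, value) // write direction: NodeStore -> Y.Doc
    }
  })
}

// The editor reads meta for display only; property edits go through mutate(),
// which produces a signed Change<T>, never a direct Yjs write.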

Every Yjs update transmitted over the network is wrapped in a SignedYjsEnvelope:

interface SignedYjsEnvelope {
  update: Uint8Array    // Raw Yjs update bytes
  authorDID: string     // did:key:...
  signature: Uint8Array // Ed25519 over BLAKE3(update)
  timestamp: number
  clientId: number      // Yjs clientID bound to this DID
}

flowchart LR
  subgraph Signing
    A["Yjs update bytes"] --> B["BLAKE3 hash"]
    B --> C["Ed25519 sign\n(private key)"]
    C --> D["SignedYjsEnvelope\n+ DID, clientId, ts"]
  end

  subgraph Verification
    E["Receive envelope"] --> F["Parse DID →\npublic key"]
    E --> G["BLAKE3 hash\nupdate bytes"]
    F --> H["Ed25519 verify\nsig vs hash"]
    G --> H
    H -->|valid| I["Apply update"]
    H -->|invalid| J["Reject + penalize"]
  end

  D -.->|network| E

Signing: BLAKE3 hash the raw Yjs update bytes → Ed25519 sign the hash with the author’s private key → attach envelope metadata (DID, clientId, timestamp).

Verification: Parse the author’s DID to extract their Ed25519 public key (self-certifying — no resolver needed) → BLAKE3 hash the update bytes → verify the Ed25519 signature against the hash.
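A sketch of both halves. The library choice (@noble/hashes, @noble/curves) and the DID parsing helper are assumptions, not necessarily what xNet ships; the hashing and signing order follow the description above.

import { blake3 } from '@noble/hashes/blake3'
import { ed25519 } from '@noble/curves/ed25519'

// SignedYjsEnvelope as defined above, repeated here so the sketch stands alone.
interface SignedYjsEnvelope {
  update: Uint8Array
  authorDID: string
  signature: Uint8Array
  timestamp: number
  clientId: number
}

// Signing side: BLAKE3 over the raw update bytes, then Ed25519 over that hash.
function signUpdate(update: Uint8Array, privateKey: Uint8Array,
                    authorDID: string, clientId: number): SignedYjsEnvelope {
  const hash = blake3(update)
  const signature = ed25519.sign(hash, privateKey)
  return { update, authorDID, signature, timestamp: Date.now(), clientId }
}

// Verification side: the public key comes from the did:key itself (no resolver).
// publicKeyFromDID is a hypothetical helper for extracting it.
function verifyEnvelope(env: SignedYjsEnvelope,
                        publicKeyFromDID: (did: string) => Uint8Array): boolean {
  const hash = blake3(env.update)
  const publicKey = publicKeyFromDID(env.authorDID)
  return ed25519.verify(env.signature, hash, publicKey)
}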

The YjsRateLimiter enforces per-peer limits:

| Limit | Value |
| --- | --- |
| Max update size | 1 MB |
| Updates per second | 30 (+ 10 burst) |
| Updates per minute | 600 |
| Max document size | 50 MB |
| Sync chunk size | 256 KB |

Large initial syncs are automatically chunked into 256 KB pieces and reassembled on the receiving end.
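A sketch of the split-and-reassemble step (the chunk framing and ordering metadata are assumed to be handled by the surrounding protocol):

const SYNC_CHUNK_SIZE = 256 * 1024 // 256 KB, per the limits above

// Split a large initial-sync payload into fixed-size chunks.
function chunkUpdate(update: Uint8Array, chunkSize = SYNC_CHUNK_SIZE): Uint8Array[] {
  const chunks: Uint8Array[] = []
  for (let offset = 0; offset < update.length; offset += chunkSize) {
    chunks.push(update.subarray(offset, offset + chunkSize))
  }
  return chunks
}

// Reassemble on the receiving end once every chunk for the sync has arrived.
function reassemble(chunks: Uint8Array[]): Uint8Array {
  const total = chunks.reduce((n, c) => n + c.length, 0)
  const out = new Uint8Array(total)
  let offset = 0
  for (const c of chunks) { out.set(c, offset); offset += c.length }
  return out
}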

The YjsPeerScorer tracks peer behavior. Peers start at score 100:

| Violation | Penalty |
| --- | --- |
| Invalid signature | -30 (auto-block after 3) |
| Unsigned update | -20 |
| Unattested clientId | -15 |
| Oversized update | -10 |
| Rate exceeded | -5 |

Score thresholds: warn at 50, throttle at 30, block at 10. Peers recover +1 per tick if they have no violations for 60 seconds.
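A sketch of the scoring rules in one place. The class shape is an assumption; the penalties, thresholds, and recovery rate are the ones listed above.

type Violation =
  | 'invalid-signature' | 'unsigned' | 'unattested-clientid'
  | 'oversized' | 'rate-exceeded'

const PENALTIES: Record<Violation, number> = {
  'invalid-signature': 30,
  'unsigned': 20,
  'unattested-clientid': 15,
  'oversized': 10,
  'rate-exceeded': 5,
}

// Minimal scorer sketch: start at 100, subtract penalties, recover slowly when clean.
class PeerScore {
  score = 100
  invalidSignatures = 0
  lastViolationAt = 0

  penalize(violation: Violation, now = Date.now()): void {
    this.score -= PENALTIES[violation]
    this.lastViolationAt = now
    // Three invalid signatures force the score below the block threshold.
    if (violation === 'invalid-signature' && ++this.invalidSignatures >= 3) this.score = 0
  }

  // Called once per scoring tick: +1 after 60 s without violations.
  recover(now = Date.now()): void {
    if (now - this.lastViolationAt >= 60_000 && this.score < 100) this.score += 1
  }

  get state(): 'ok' | 'warned' | 'throttled' | 'blocked' {
    if (this.score <= 10) return 'blocked'
    if (this.score <= 30) return 'throttled'
    if (this.score <= 50) return 'warned'
    return 'ok'
  }
}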

Yjs assigns random integer clientIDs with no identity binding. xNet fixes this with signed attestations:

interface ClientIdAttestation {
  clientId: number
  did: string
  signature: Uint8Array // Ed25519 over BLAKE3("clientid-bind:{clientId}:{did}:{room}:{expiresAt}")
  expiresAt: number
  room: string
}

A ClientIdMap maintains bidirectional clientId ↔ DID mappings. When an envelope arrives, validateClientIdOwnership checks that the envelope’s clientID matches the attested DID.
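A sketch of that ownership check. The internal map shape is assumed; attestation signature verification is taken as already done before registration.

// Minimal sketch of the bidirectional mapping and the ownership check.
class ClientIdMap {
  private byClientId = new Map<number, string>() // clientId -> DID
  private byDID = new Map<string, number>()      // DID -> clientId

  register(attestation: { clientId: number; did: string; expiresAt: number }): void {
    // (the attestation's Ed25519 signature is assumed to be verified before this point)
    this.byClientId.set(attestation.clientId, attestation.did)
    this.byDID.set(attestation.did, attestation.clientId)
  }

  didFor(clientId: number): string | undefined {
    return this.byClientId.get(clientId)
  }
}

function validateClientIdOwnership(
  map: ClientIdMap,
  envelope: { clientId: number; authorDID: string },
): boolean {
  // The clientId in the envelope must resolve to the same DID that signed it.
  return map.didFor(envelope.clientId) === envelope.authorDID
}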

The YjsBatcher collects individual Yjs updates (around 5/second from keystrokes) and flushes them in batches every 2 seconds (or at most 50 updates per batch). This reduces signature operations from ~5/sec to ~0.5/sec. Batches flush early on paragraph breaks.
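A sketch of that batching loop. The class shape and timer wiring are simplified assumptions; Y.mergeUpdates is the standard Yjs helper for combining several updates into one before signing.

import * as Y from 'yjs'

// Minimal batcher sketch: buffer updates, flush every 2 s or at 50 updates.
class UpdateBatcher {
  private pending: Uint8Array[] = []
  private timer: ReturnType<typeof setTimeout> | null = null

  constructor(private flushFn: (merged: Uint8Array) => void,
              private intervalMs = 2000,
              private maxBatch = 50) {}

  push(update: Uint8Array): void {
    this.pending.push(update)
    if (this.pending.length >= this.maxBatch) return this.flush()
    this.timer ??= setTimeout(() => this.flush(), this.intervalMs)
  }

  flush(): void {
    if (this.timer) { clearTimeout(this.timer); this.timer = null }
    if (this.pending.length === 0) return
    // One merged update means one signature instead of dozens.
    const merged = Y.mergeUpdates(this.pending)
    this.pending = []
    this.flushFn(merged)
  }
}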

Persisted Yjs state is BLAKE3-hashed before writing to storage. On load, the hash is re-verified to detect storage-level corruption before it propagates to peers. After 100 incremental updates or 1 hour, the document is compacted (re-encoded from scratch).
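A sketch of the integrity check on load, assuming a storage record that carries the hash alongside the state (library and record shape are illustrative):

import { blake3 } from '@noble/hashes/blake3'
import { bytesToHex } from '@noble/hashes/utils'

// Hypothetical storage record shape.
interface PersistedDoc { state: Uint8Array; blake3Hex: string }

// Hash before writing so corruption is detectable later.
function persist(state: Uint8Array): PersistedDoc {
  return { state, blake3Hex: bytesToHex(blake3(state)) }
}

// Re-verify on load so corrupt state never reaches peers.
function loadVerified(record: PersistedDoc): Uint8Array {
  if (bytesToHex(blake3(record.state)) !== record.blake3Hex) {
    throw new Error('Persisted Yjs state is corrupt; refusing to sync it to peers')
  }
  return record.state
}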

The SyncManager is the top-level orchestrator that wires everything together.

interface SyncManagerConfig {
  nodeStore: NodeStore
  storage: NodeStorageAdapter
  signalingUrl: string // WebSocket URL, e.g., 'wss://hub.xnet.dev'
  poolSize?: number    // Default: 50 warm documents
  trackTTL?: number    // Default: 7 days
  authorDID?: string
  blobStore?: BlobStoreForSync
}

start → connected → drainOfflineQueue → sync
disconnected → enqueueLocally
stop → flushDirtyDocs → pruneRegistry → save
  1. start() — Load registry and offline queue from storage, connect WebSocket, join rooms for all tracked nodes, start blob sync.
  2. On connected — Drain offline queue (broadcast queued updates), re-send sync-step1 for all pooled documents.
  3. acquire(nodeId) — Load Y.Doc from pool or storage, set up broadcast listener, join room, send sync-step1. Used by useNode.
  4. release(nodeId) — Decrement refcount. Doc stays warm in the pool and continues syncing in the background.
  5. track(nodeId, schemaId) — Add to the persistent registry for background sync. Tracked nodes are synced even when no component has them open.
  6. stop() — Leave all rooms, disconnect, flush dirty docs to storage, prune expired registry entries, save state.
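A usage sketch of that lifecycle from a caller's point of view. The constructor shape, return types, and the nodeStore, storage, myDID, nodeId, and schemaId values are assumptions based on the config and methods above.

// Usage sketch: nodeStore, storage, myDID, nodeId, schemaId are assumed to exist elsewhere.
const sync = new SyncManager({
  nodeStore,
  storage,
  signalingUrl: 'wss://hub.xnet.dev',
  poolSize: 50,                        // warm documents kept in memory
  trackTTL: 7 * 24 * 60 * 60 * 1000,   // 7 days, assuming milliseconds
  authorDID: myDID,
})

await sync.start()                     // load registry + offline queue, connect, join rooms

const doc = await sync.acquire(nodeId) // what useNode does under the hood
sync.track(nodeId, schemaId)           // keep syncing in the background after release
sync.release(nodeId)                   // doc stays warm in the pool

await sync.stop()                      // flush dirty docs, prune registry, save state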

The pool manages Y.Doc instances in three states:

  • Active (refCount > 0) — Currently used by a component. Never evicted.
  • Warm (refCount = 0) — Released but kept in memory for quick re-acquisition. Evicted via LRU when pool is full.
  • Cold — Evicted to storage. Loaded on next acquire().

Dirty documents are persisted on a debounced 2-second timer.

The persistent tracked-node set that survives app restarts:

interface TrackedNode {
  nodeId: string
  schemaId: string
  lastOpened: number
  lastSynced: number
  pinned: boolean // Pinned nodes never expire
}

Tracked nodes with no activity for trackTTL (default 7 days) are pruned on shutdown, unless pinned.

When the WebSocket is disconnected, local Y.Doc updates are enqueued:

interface OfflineQueue {
  enqueue(nodeId: string, update: Uint8Array): Promise<void>
  drain(handler): Promise<number>
  readonly size: number
}

  • Max 1000 entries (FIFO — oldest dropped if full)
  • Persisted immediately on enqueue for crash resilience
  • Drained in order on reconnect; stops on first error (entry stays at front for retry)
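A drain sketch matching those rules (the handler signature and in-memory entry array are assumptions): process in FIFO order and stop at the first failure so that entry stays at the head for the next attempt.

interface QueueEntry { nodeId: string; update: Uint8Array }

// Sketch: FIFO drain that stops at the first failure, leaving that entry queued.
async function drain(
  entries: QueueEntry[],
  handler: (nodeId: string, update: Uint8Array) => Promise<void>,
): Promise<number> {
  let sent = 0
  while (entries.length > 0) {
    const entry = entries[0]
    try {
      await handler(entry.nodeId, entry.update)
    } catch {
      break // entry stays at the front for retry on the next reconnect
    }
    entries.shift()
    sent++
  }
  return sent
}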

A single multiplexed WebSocket connection handles all documents (O(1) connections, not O(N) per doc):

  • Room-based pub/sub: subscribe(topics), publish(topic, data)
  • Auto-reconnect with configurable delay and max attempts
  • On reconnect: automatically re-subscribes to all rooms

A per-document provider that implements the Yjs sync protocol over the shared connection:

  1. On connect — subscribe to room, broadcast sync-step1 (state vector)
  2. On sync-step1 from peer — respond with sync-step2 (diff)
  3. On sync-step2 — apply update, mark synced
  4. On sync-update — apply incremental update
  5. Awareness updates (cursor position, user name) flow alongside document updates

xNet uses Lamport clocks for causal ordering of structured data changes:

interface LamportTimestamp {
  time: number
  author: DID
}

  • tick(clock) — increment time by 1, return [newClock, timestamp]
  • receive(clock, receivedTime) — set time to max(ours, received) (no increment)
  • compareLamportTimestamps(a, b) — compare by time first, then DID string as tie-breaker
  • serializeTimestamp() — zero-padded 16-digit time + DID for lexicographic sorting
  • All functions are pure — they return new objects, never mutate
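A sketch of those pure functions. The exact return shapes are assumptions, but they follow the bullets above: tick increments and stamps, receive jumps forward without incrementing, and serialization zero-pads the time so string order matches causal order.

type DID = string
interface LamportClock { time: number; author: DID }
interface LamportTimestamp { time: number; author: DID }

// tick: local event, increment and stamp. Returns a new clock; never mutates.
function tick(clock: LamportClock): [LamportClock, LamportTimestamp] {
  const next = { ...clock, time: clock.time + 1 }
  return [next, { time: next.time, author: next.author }]
}

// receive: remote event, jump forward to max(ours, received) with no increment.
function receive(clock: LamportClock, receivedTime: number): LamportClock {
  return { ...clock, time: Math.max(clock.time, receivedTime) }
}

// serialize: zero-padded 16-digit time + DID for lexicographic sorting.
function serializeTimestamp(ts: LamportTimestamp): string {
  return `${ts.time.toString().padStart(16, '0')}:${ts.author}`
}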

Changes link to their parent via parentHash, forming a directed acyclic graph:

  • validateChain() — verify all hashes are correct
  • detectFork() — find where two changes share a parent
  • topologicalSort() — deterministic replay order (parents before children)
  • getForks() — returns detailed fork info with branches sorted by Lamport

Forks are valid — they represent concurrent edits, not errors. Field-level LWW resolves them deterministically.
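A sketch of fork detection on that DAG (the change shape is trimmed from Change<T> above; the function name and return shape are illustrative): two changes fork when they share a parent hash, and branches are ordered by Lamport timestamp for deterministic handling.

interface ChangeRef {
  hash: string
  parentHash: string | null
  lamport: { time: number; author: string }
}

// Group changes by parent; any parent with more than one child is a fork point.
function detectForks(changes: ChangeRef[]): Map<string, ChangeRef[]> {
  const byParent = new Map<string, ChangeRef[]>()
  for (const change of changes) {
    if (change.parentHash === null) continue
    const siblings = byParent.get(change.parentHash) ?? []
    siblings.push(change)
    byParent.set(change.parentHash, siblings)
  }
  const forks = new Map<string, ChangeRef[]>()
  for (const [parent, children] of byParent) {
    if (children.length > 1) {
      // Branches sorted by Lamport time, DID as tie-breaker.
      forks.set(parent, [...children].sort((a, b) =>
        a.lamport.time - b.lamport.time || a.lamport.author.localeCompare(b.lamport.author)))
    }
  }
  return forks
}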

Enable debug logging in the browser console:

localStorage.setItem('xnet:sync:debug', 'true')

The DevTools sync panel shows:

  • Connection status and peer count
  • Document pool state (active, warm, cold)
  • Registry entries with last-synced timestamps
  • Offline queue size
  • Per-peer scoring data

Use pnpm dev:both in the Electron app to run two instances for testing sync behavior locally.