# Sync Guide
## The dual-CRDT model

xNet uses two complementary sync strategies, each optimized for a different data shape:
| Data type | Mechanism | Conflict resolution |
|---|---|---|
| Rich text (editor content) | Yjs CRDT | Character-level merge — no conflicts by design |
| Structured data (properties) | NodeStore with Lamport clocks | Field-level Last-Writer-Wins (LWW) |
Both strategies sync over the same multiplexed WebSocket connection.
## Yjs for rich text

Each node with editor content gets a Y.Doc. Yjs handles character-level CRDT merging internally — two users typing in the same paragraph at the same time will see their edits merge without conflict. The sync protocol exchanges state vectors and diffs:
- Peer A sends `sync-step1` (its state vector)
- Peer B responds with `sync-step2` (the diff A is missing), plus its own `sync-step1`
- After the initial exchange, incremental updates flow as `sync-update` messages (see the sketch below)
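The same handshake can be reproduced with the core Yjs API. This is a minimal sketch of the exchange semantics only; xNet's provider framing (message types, envelopes, transport) is not shown here.

```ts
import * as Y from 'yjs'

// Two in-memory docs stand in for two peers; in production the
// byte arrays travel over the multiplexed WebSocket.
const docA = new Y.Doc()
const docB = new Y.Doc()

// sync-step1: A sends its state vector
const stateVectorA = Y.encodeStateVector(docA)

// sync-step2: B computes exactly the diff A is missing, and A applies it
const diffForA = Y.encodeStateAsUpdate(docB, stateVectorA)
Y.applyUpdate(docA, diffForA)

// After the handshake, incremental changes flow as sync-update messages
docA.on('update', (update: Uint8Array) => {
  Y.applyUpdate(docB, update)
})
```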
## NodeStore for structured data

Property changes (title, status, assignee, etc.) go through the Change<T> pipeline:

```ts
interface Change<T> {
  id: string
  type: string
  payload: T
  hash: ContentId               // cid:blake3:... content-addressed
  parentHash: ContentId | null  // Hash chain linkage
  authorDID: DID
  signature: Uint8Array         // Ed25519 over hash
  lamport: LamportTimestamp     // Causal ordering
}
```

Each change is signed with Ed25519, content-addressed with BLAKE3, and linked into a hash chain. Conflicts are resolved by comparing Lamport timestamps — higher time wins, with the DID string as a deterministic tie-breaker.
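For intuition, field-level LWW reduces to a pure comparison over each change's Lamport timestamp. This is a hedged sketch, not the NodeStore implementation; the `resolve` helper is hypothetical.

```ts
// Hypothetical helper: pick the winning change for a single field.
function resolve<T>(a: Change<T>, b: Change<T>): Change<T> {
  if (a.lamport.time !== b.lamport.time) {
    return a.lamport.time > b.lamport.time ? a : b // higher time wins
  }
  // Deterministic tie-breaker: lexicographic comparison of author DIDs
  return a.lamport.author > b.lamport.author ? a : b
}
```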
## MetaBridge: the one-way bridge

The MetaBridge keeps the Y.Doc’s metadata map in sync with NodeStore properties, but the data flow is intentionally one-directional:

```
NodeStore → MetaBridge → Y.Doc meta map   (write)
Y.Doc meta → Editor UI                    (read-only display)
Editor UI → mutate() → NodeStore          (signed writes)
```

This prevents malicious Yjs updates from poisoning structured data. Property changes always go through the mutate() → signed Change pipeline, never through the Yjs document directly.
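Conceptually, the bridge is a one-way observer. The sketch below is illustrative only: the `onChange` subscription API is hypothetical, and only the write direction exists by design.

```ts
import * as Y from 'yjs'

function bridgeMeta(doc: Y.Doc, nodeStore: NodeStore, nodeId: string) {
  const meta = doc.getMap('meta')
  // Write path: NodeStore property changes flow into the Y.Doc meta map.
  // (onChange is a hypothetical subscription API for this sketch.)
  nodeStore.onChange(nodeId, (props: Record<string, unknown>) => {
    doc.transact(() => {
      for (const [key, value] of Object.entries(props)) meta.set(key, value)
    })
  })
  // Deliberately no meta.observe(...) → NodeStore path: editor reads of
  // the meta map are display-only, and writes go through mutate().
}
```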
## Yjs security

Every Yjs update transmitted over the network is wrapped in a SignedYjsEnvelope:

```ts
interface SignedYjsEnvelope {
  update: Uint8Array    // Raw Yjs update bytes
  authorDID: string     // did:key:...
  signature: Uint8Array // Ed25519 over BLAKE3(update)
  timestamp: number
  clientId: number      // Yjs clientID bound to this DID
}
```

### Signing and verification flow
Section titled “Signing and verification flow”flowchart LR
subgraph Signing
A["Yjs update bytes"] --> B["BLAKE3 hash"]
B --> C["Ed25519 sign\n(private key)"]
C --> D["SignedYjsEnvelope\n+ DID, clientId, ts"]
end
subgraph Verification
E["Receive envelope"] --> F["Parse DID →\npublic key"]
E --> G["BLAKE3 hash\nupdate bytes"]
F --> H["Ed25519 verify\nsig vs hash"]
G --> H
H -->|valid| I["Apply update"]
H -->|invalid| J["Reject + penalize"]
end
D -.->|network| E
**Signing:** BLAKE3-hash the raw Yjs update bytes → Ed25519-sign the hash with the author’s private key → attach envelope metadata (DID, clientId, timestamp).

**Verification:** Parse the author’s DID to extract their Ed25519 public key (self-certifying — no resolver needed) → BLAKE3-hash the update bytes → verify the Ed25519 signature against the hash.
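As a concrete sketch, the two paths look roughly like this using the @noble libraries. Assumptions: @noble/hashes and @noble/curves as the crypto primitives, and the did:key parsing step is elided; xNet's actual crypto layer may differ.

```ts
import { blake3 } from '@noble/hashes/blake3'
import { ed25519 } from '@noble/curves/ed25519'

function signUpdate(
  update: Uint8Array,
  privateKey: Uint8Array,
  authorDID: string,
  clientId: number
): SignedYjsEnvelope {
  const hash = blake3(update)                      // hash the raw update bytes
  const signature = ed25519.sign(hash, privateKey) // Ed25519 over the hash
  return { update, authorDID, signature, timestamp: Date.now(), clientId }
}

function verifyEnvelope(env: SignedYjsEnvelope, publicKey: Uint8Array): boolean {
  // publicKey is assumed to have been extracted from env.authorDID
  // (did:key is self-certifying); that parsing step is elided here.
  return ed25519.verify(env.signature, blake3(env.update), publicKey)
}
```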
## Rate limiting

The YjsRateLimiter enforces per-peer limits:
| Limit | Value |
|---|---|
| Max update size | 1 MB |
| Updates per second | 30 (+ 10 burst) |
| Updates per minute | 600 |
| Max document size | 50 MB |
| Sync chunk size | 256 KB |
Large initial syncs are automatically chunked into 256 KB pieces and reassembled on the receiving end.
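The chunking step can be as simple as slicing the payload. This sketch is illustrative (the 256 KB constant matches the table above; the function name is made up):

```ts
const CHUNK_SIZE = 256 * 1024

// Split a large initial-sync payload into 256 KB pieces; subarray
// creates views, so no bytes are copied until transmission.
function* chunkUpdate(update: Uint8Array): Generator<Uint8Array> {
  for (let offset = 0; offset < update.length; offset += CHUNK_SIZE) {
    yield update.subarray(offset, Math.min(offset + CHUNK_SIZE, update.length))
  }
}
```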
## Peer scoring

The YjsPeerScorer tracks peer behavior. Peers start at a score of 100:
| Violation | Penalty |
|---|---|
| Invalid signature | -30 (auto-block after 3) |
| Unsigned update | -20 |
| Unattested clientId | -15 |
| Oversized update | -10 |
| Rate exceeded | -5 |
Score thresholds: warn at 50, throttle at 30, block at 10. Peers recover +1 per tick if they have no violations for 60 seconds.
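Put together, the scoring model behaves roughly like this sketch. The penalty values mirror the table above, but the class shape and field names are illustrative, not the YjsPeerScorer API.

```ts
const PENALTIES = {
  invalidSignature: 30,
  unsignedUpdate: 20,
  unattestedClientId: 15,
  oversizedUpdate: 10,
  rateExceeded: 5,
} as const

class PeerScore {
  score = 100
  private lastViolation = 0

  penalize(kind: keyof typeof PENALTIES, now = Date.now()): void {
    this.score = Math.max(0, this.score - PENALTIES[kind])
    this.lastViolation = now
  }

  // Called periodically: +1 recovery after 60s without violations
  tick(now = Date.now()): void {
    if (now - this.lastViolation >= 60_000) {
      this.score = Math.min(100, this.score + 1)
    }
  }

  get status(): 'ok' | 'warned' | 'throttled' | 'blocked' {
    if (this.score <= 10) return 'blocked'
    if (this.score <= 30) return 'throttled'
    if (this.score <= 50) return 'warned'
    return 'ok'
  }
}
```

Note that three invalid signatures take a fresh peer from 100 to 10, which lines up with the auto-block threshold in the table.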
## ClientID attestation

Yjs assigns random integer clientIDs with no identity binding. xNet fixes this with signed attestations:

```ts
interface ClientIdAttestation {
  clientId: number
  did: string
  signature: Uint8Array // Ed25519 over BLAKE3("clientid-bind:{clientId}:{did}:{room}:{expiresAt}")
  expiresAt: number
  room: string
}
```

A ClientIdMap maintains bidirectional clientId ↔ DID mappings. When an envelope arrives, validateClientIdOwnership checks that the envelope’s clientID matches the attested DID.
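The binding string in the signature comment pins the attestation to a specific room and expiry. A sketch of constructing that payload (the helper name is hypothetical; the string format comes from the interface comment above):

```ts
// Hypothetical helper: build the exact bytes that get BLAKE3-hashed
// and Ed25519-signed for an attestation.
function attestationPayload(a: Omit<ClientIdAttestation, 'signature'>): Uint8Array {
  const msg = `clientid-bind:${a.clientId}:${a.did}:${a.room}:${a.expiresAt}`
  return new TextEncoder().encode(msg)
}
```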
## Update batching

The YjsBatcher collects individual Yjs updates (around 5/second from keystrokes) and flushes them in batches every 2 seconds (or at most 50 updates per batch). This reduces signature operations from ~5/sec to ~0.5/sec. Batches flush early on paragraph breaks.
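A minimal version of the pattern, built on Yjs's own update merging (Y.mergeUpdates is real Yjs API; the class shape is illustrative and the paragraph-break trigger is omitted):

```ts
import * as Y from 'yjs'

class UpdateBatcher {
  private pending: Uint8Array[] = []
  private timer: ReturnType<typeof setTimeout> | null = null

  constructor(private onFlush: (merged: Uint8Array) => void) {}

  push(update: Uint8Array): void {
    this.pending.push(update)
    if (this.pending.length >= 50) {
      this.flush() // size cap reached: flush immediately
      return
    }
    // Start the 2-second window on the first update of a batch
    this.timer ??= setTimeout(() => this.flush(), 2000)
  }

  flush(): void {
    if (this.timer !== null) {
      clearTimeout(this.timer)
      this.timer = null
    }
    if (this.pending.length === 0) return
    // Y.mergeUpdates collapses many small updates into one, so only
    // the merged result needs to be hashed and signed.
    this.onFlush(Y.mergeUpdates(this.pending))
    this.pending = []
  }
}
```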
## Hash-at-rest integrity

Persisted Yjs state is BLAKE3-hashed before writing to storage. On load, the hash is re-verified to detect storage-level corruption before it propagates to peers. After 100 incremental updates or 1 hour, the document is compacted (re-encoded from scratch).
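In outline, the write and load paths look like this sketch (record shape and function names are illustrative; the hash primitive is assumed to be @noble/hashes):

```ts
import { blake3 } from '@noble/hashes/blake3'
import { bytesToHex } from '@noble/hashes/utils'

interface PersistedDoc {
  state: Uint8Array // encoded Y.Doc state
  hash: string      // BLAKE3 of state, computed at write time
}

function toRecord(state: Uint8Array): PersistedDoc {
  return { state, hash: bytesToHex(blake3(state)) }
}

function fromRecord(record: PersistedDoc): Uint8Array {
  // Re-verify before the state can propagate to peers
  if (bytesToHex(blake3(record.state)) !== record.hash) {
    throw new Error('Storage-level corruption detected: hash mismatch')
  }
  return record.state
}
```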
## SyncManager

The SyncManager is the top-level orchestrator that wires everything together.

### Configuration

```ts
interface SyncManagerConfig {
  nodeStore: NodeStore
  storage: NodeStorageAdapter
  signalingUrl: string // WebSocket URL, e.g., 'wss://hub.xnet.dev'
  poolSize?: number    // Default: 50 warm documents
  trackTTL?: number    // Default: 7 days
  authorDID?: string
  blobStore?: BlobStoreForSync
}
```

### Lifecycle
```
start → connected → drainOfflineQueue → sync
  ↕
disconnected → enqueueLocally
  ↕
stop → flushDirtyDocs → pruneRegistry → save
```

- `start()` — Load the registry and offline queue from storage, connect the WebSocket, join rooms for all tracked nodes, start blob sync.
- On connected — Drain the offline queue (broadcast queued updates), re-send `sync-step1` for all pooled documents.
- `acquire(nodeId)` — Load the Y.Doc from the pool or storage, set up the broadcast listener, join the room, send `sync-step1`. Used by `useNode` (see the usage sketch below).
- `release(nodeId)` — Decrement the refcount. The doc stays warm in the pool and continues syncing in the background.
- `track(nodeId, schemaId)` — Add the node to the persistent registry for background sync. Tracked nodes are synced even when no component has them open.
- `stop()` — Leave all rooms, disconnect, flush dirty docs to storage, prune expired registry entries, save state.
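A hypothetical end-to-end usage sketch, assuming a constructor that takes SyncManagerConfig; the nodeId and schemaId values are made up:

```ts
const sync = new SyncManager({
  nodeStore,
  storage,
  signalingUrl: 'wss://hub.xnet.dev',
  authorDID: myDID,
})

await sync.start()                          // load state, connect, join rooms

const doc = await sync.acquire('node-123')  // warm Y.Doc, sends sync-step1
sync.track('node-123', 'task')              // keep syncing after release

sync.release('node-123')                    // doc stays warm in the pool
await sync.stop()                           // flush, prune, save
```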
### NodePool

The pool manages Y.Doc instances in three states:

- **Active** (refCount > 0) — Currently used by a component. Never evicted.
- **Warm** (refCount = 0) — Released but kept in memory for quick re-acquisition. Evicted via LRU when the pool is full.
- **Cold** — Evicted to storage. Loaded on the next `acquire()`.
Dirty documents are persisted on a debounced 2-second timer.
### Registry

The persistent tracked-node set that survives app restarts:

```ts
interface TrackedNode {
  nodeId: string
  schemaId: string
  lastOpened: number
  lastSynced: number
  pinned: boolean // Pinned nodes never expire
}
```

Tracked nodes with no activity for `trackTTL` (default 7 days) are pruned on shutdown, unless pinned.
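Pruning on shutdown can be expressed as a filter. This sketch assumes "activity" means the later of lastOpened and lastSynced, which is an interpretation, not confirmed behavior:

```ts
const DEFAULT_TRACK_TTL = 7 * 24 * 60 * 60 * 1000 // 7 days in ms

function pruneRegistry(entries: TrackedNode[], now = Date.now()): TrackedNode[] {
  return entries.filter((n) => {
    if (n.pinned) return true // pinned nodes never expire
    const lastActivity = Math.max(n.lastOpened, n.lastSynced)
    return now - lastActivity < DEFAULT_TRACK_TTL
  })
}
```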
### OfflineQueue

When the WebSocket is disconnected, local Y.Doc updates are enqueued:

```ts
interface OfflineQueue {
  enqueue(nodeId: string, update: Uint8Array): Promise<void>
  // Handler signature inferred from enqueue; the original leaves it untyped
  drain(handler: (nodeId: string, update: Uint8Array) => Promise<void>): Promise<number>
  readonly size: number
}
```

- Max 1000 entries (FIFO — oldest dropped if full)
- Persisted immediately on enqueue for crash resilience
- Drained in order on reconnect; stops on the first error (the entry stays at the front for retry)
## Transport

### ConnectionManager

A single multiplexed WebSocket connection handles all documents (O(1) connections, not O(N) per doc):

- Room-based pub/sub: `subscribe(topics)`, `publish(topic, data)`
- Auto-reconnect with configurable delay and max attempts
- On reconnect: automatically re-subscribes to all rooms
### WebSocketSyncProvider

A per-document provider that implements the Yjs sync protocol over the shared connection:

- On connect — subscribe to the room, broadcast `sync-step1` (state vector)
- On `sync-step1` from a peer — respond with `sync-step2` (diff)
- On `sync-step2` — apply the update, mark synced
- On `sync-update` — apply the incremental update
- Awareness updates (cursor position, user name) flow alongside document updates
## Lamport clocks

xNet uses Lamport clocks for causal ordering of structured data changes:

```ts
interface LamportTimestamp {
  time: number
  author: DID
}
```

- `tick(clock)` — increment time by 1, return `[newClock, timestamp]`
- `receive(clock, receivedTime)` — set time to `max(ours, received)` (no increment)
- `compareLamportTimestamps(a, b)` — compare by time first, then by DID string as the tie-breaker
- `serializeTimestamp()` — zero-padded 16-digit time + DID for lexicographic sorting
- All functions are pure — they return new objects, never mutate (see the sketch below)
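A hedged sketch of what those pure helpers could look like; types are simplified and DID is treated as a plain string here:

```ts
type DID = string

function tick(clock: LamportTimestamp): [LamportTimestamp, LamportTimestamp] {
  // New clock and the timestamp to stamp on the outgoing change
  const next = { time: clock.time + 1, author: clock.author }
  return [next, { ...next }]
}

function receive(clock: LamportTimestamp, receivedTime: number): LamportTimestamp {
  // Adopt the max of both times; no increment on receive
  return { time: Math.max(clock.time, receivedTime), author: clock.author }
}

function compareLamportTimestamps(a: LamportTimestamp, b: LamportTimestamp): number {
  if (a.time !== b.time) return a.time - b.time
  return a.author < b.author ? -1 : a.author > b.author ? 1 : 0
}

function serializeTimestamp(ts: LamportTimestamp): string {
  // Zero-padded 16-digit time so lexicographic order matches numeric order
  return `${ts.time.toString().padStart(16, '0')}:${ts.author}`
}
```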
## Hash chains

Changes link to their parent via parentHash, forming a directed acyclic graph:

- `validateChain()` — verify that all hashes are correct
- `detectFork()` — find where two changes share a parent
- `topologicalSort()` — deterministic replay order (parents before children)
- `getForks()` — returns detailed fork info with branches sorted by Lamport
Forks are valid — they represent concurrent edits, not errors. Field-level LWW resolves them deterministically.
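For intuition, fork detection reduces to counting children per parent hash. This standalone sketch assumes ContentId is a string; it is not the detectFork implementation:

```ts
type ContentId = string

// A fork point is any parent referenced by more than one change.
function findForkPoints(changes: Change<unknown>[]): ContentId[] {
  const childCount = new Map<ContentId, number>()
  for (const c of changes) {
    if (c.parentHash !== null) {
      childCount.set(c.parentHash, (childCount.get(c.parentHash) ?? 0) + 1)
    }
  }
  return [...childCount.entries()]
    .filter(([, count]) => count > 1)
    .map(([hash]) => hash)
}
```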
## Debugging

Enable debug logging in the browser console:

```ts
localStorage.setItem('xnet:sync:debug', 'true')
```

The DevTools sync panel shows:
- Connection status and peer count
- Document pool state (active, warm, cold)
- Registry entries with last-synced timestamps
- Offline queue size
- Per-peer scoring data
Use `pnpm dev:both` in the Electron app to run two instances for testing sync behavior locally.