Offline Patterns

xNet apps don’t fetch data from a server — they read from a local database (SQLite, OPFS-backed in modern browsers). This means:

  • Queries are instant — no loading spinners for cached data
  • Mutations are immediate — writes go to local storage first
  • The app works offline — no network required for core functionality
  • Sync is eventual — changes propagate to peers when a connection is available

You don’t need to add special offline handling. The architecture is offline-first from the ground up.

While online:
  User edits → Local Y.Doc update → Broadcast via WebSocket → Peers apply
                                  → Persist to SQLite

While offline:
  User edits → Local Y.Doc update → OfflineQueue (persisted)
                                  → Persist to SQLite

On reconnect:
  WebSocket connects → OfflineQueue drains → Broadcast queued updates
                     → Exchange state vectors → Merge remote changes

The OfflineQueue holds up to 1000 entries, persisted to SQLite immediately for crash resilience. On reconnect, entries are drained in order. If a broadcast fails, the entry stays at the front of the queue for retry.

Since mutations write to local storage first, the UI updates immediately. No optimistic update logic is needed — the data is already local:

function TaskList() {
  const { data: tasks } = useQuery(TaskSchema)
  const mutate = useMutate()

  const addTask = () => {
    // This writes locally and returns immediately
    mutate.create(TaskSchema, { title: 'New task', status: 'todo' })
    // useQuery re-renders with the new task — no loading state
  }

  return (
    <div>
      <button onClick={addTask}>Add Task</button>
      {tasks.map((t) => (
        <TaskRow key={t.id} task={t} />
      ))}
    </div>
  )
}
To surface connection state in the UI, read it from the sync manager:

import { useSyncManager } from '@xnetjs/react'

function NetworkBadge() {
  const sync = useSyncManager()
  return (
    <span>
      {sync.status === 'connected' ? 'Online' : 'Offline'}
      {sync.queueSize > 0 && ` (${sync.queueSize} pending)`}
    </span>
  )
}

Track nodes so they sync even when no component has them open:

const sync = useSyncManager()

// Track a node — it syncs in the background even when no component has it open
sync.track(nodeId, schemaId)

Tracked nodes stay in the sync registry for 7 days by default. Pinned nodes never expire.
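The expiry rule can be sketched like this (illustrative; the entry shape and function name are assumptions, not the real registry API):

```typescript
// Sketch of the registry expiry rule described above. Illustrative names.
const SEVEN_DAYS_MS = 7 * 24 * 60 * 60 * 1000

type RegistryEntry = { nodeId: string; lastSynced: number; pinned: boolean }

// An entry expires when it is unpinned and its last sync is older than the TTL.
function isExpired(entry: RegistryEntry, now: number, ttlMs = SEVEN_DAYS_MS): boolean {
  return !entry.pinned && now - entry.lastSynced > ttlMs
}
```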

When offline changes merge with remote changes, both Yjs (rich text) and the NodeStore (properties) resolve conflicts automatically:

  • Rich text — Yjs CRDT merges character-by-character. Both users’ edits appear.
  • Properties — Field-level LWW. The change with the higher Lamport timestamp wins. If two users edit different fields on the same node, both changes are preserved.

There are no merge conflict dialogs. The system is designed so that all peers converge to the same state deterministically.
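The property-merge rule can be illustrated with a minimal sketch. This is not the NodeStore's actual implementation; the types and tie-break rule shown (peer ID ordering) are assumptions chosen to make the example deterministic:

```typescript
// Sketch of field-level last-writer-wins merge with Lamport timestamps.
// Illustrative only; not the real NodeStore internals.
type FieldValue = { value: unknown; lamport: number; peerId: string }
type NodeProps = Record<string, FieldValue>

function mergeProps(local: NodeProps, remote: NodeProps): NodeProps {
  const merged: NodeProps = { ...local }
  for (const [field, incoming] of Object.entries(remote)) {
    const current = merged[field]
    // Higher Lamport timestamp wins; a fixed peer-ID ordering breaks ties,
    // so every replica converges to the same result regardless of arrival order.
    if (
      !current ||
      incoming.lamport > current.lamport ||
      (incoming.lamport === current.lamport && incoming.peerId > current.peerId)
    ) {
      merged[field] = incoming
    }
  }
  return merged
}
```

Because each field merges independently, two users editing different fields of the same node both keep their changes.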

For forms that create multiple related nodes, use mutate([...]) to execute a batched write:

const mutate = useMutate()

const createProjectWithTasks = async () => {
  const result = await mutate([
    { type: 'create', schema: ProjectSchema, id: 'project-acme', data: { name: 'Acme' } },
    { type: 'create', schema: TaskSchema, data: { title: 'Setup', project: 'project-acme' } },
    { type: 'create', schema: TaskSchema, data: { title: 'Launch', project: 'project-acme' } },
  ])
  // All three writes run through one mutate call.
  // The known project ID keeps references stable while offline and after sync.
}

xNet uses SQLite through the @xnetjs/storage adapter. Data is organized per-node:

  • Y.Doc state — Full Yjs document state, BLAKE3-hashed for integrity
  • Node properties — Structured data from the NodeStore
  • Offline queue — Pending updates waiting to sync
  • Registry — Tracked node set with last-synced timestamps
  • Blobs — File attachments stored locally

The storage layer handles serialization, compression, and integrity checks. You don’t interact with SQLite directly.
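The shape of the integrity check is worth seeing, even though you never write it yourself. The sketch below uses SHA-256 from Node's standard library as a stand-in, since BLAKE3 is not built in; the record shape and function names are invented:

```typescript
import { createHash } from 'node:crypto'

// Sketch of the hash-on-write, verify-on-read integrity check described above.
// SHA-256 stands in for BLAKE3 here; the structure of the check is the same.
type StoredDoc = { state: Uint8Array; digest: string }

function hashState(state: Uint8Array): string {
  return createHash('sha256').update(state).digest('hex')
}

function store(state: Uint8Array): StoredDoc {
  // Compute the digest at write time and persist it alongside the bytes.
  return { state, digest: hashState(state) }
}

function load(doc: StoredDoc): Uint8Array {
  // A digest mismatch on read means the stored bytes were corrupted.
  if (hashState(doc.state) !== doc.digest) {
    throw new Error('integrity check failed: stored state is corrupt')
  }
  return doc.state
}
```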

Resource limits:

  • Offline queue — 1000 entries (FIFO, oldest dropped)
  • Y.Doc size — 50 MB per document
  • Warm document pool — 50 documents in memory
  • Registry TTL — 7 days (configurable)

If the offline queue fills up during an extended offline period, the oldest entries are dropped. The Y.Doc state in SQLite is still preserved — only the incremental updates that haven’t been broadcast are lost. On reconnect, a full state-vector exchange with peers will reconcile any gaps.
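The reconciliation idea behind the state-vector exchange can be sketched simply: each side tracks the highest update clock it has seen per origin, and on reconnect sends only what the other side is missing. This is an illustration of the concept, not Yjs's actual encoding:

```typescript
// Sketch of a state-vector exchange. Illustrative names; Yjs's real
// wire format and update representation differ.
type StateVector = Record<string, number> // origin peer ID -> highest clock seen
type Update = { origin: string; clock: number; payload?: Uint8Array }

// Given our stored updates and the remote peer's state vector, select the
// updates the remote has not yet seen. Dropped queue entries don't matter:
// the full local state still covers them, so the gap closes here.
function missingUpdates(localUpdates: Update[], remote: StateVector): Update[] {
  return localUpdates.filter((u) => u.clock > (remote[u.origin] ?? 0))
}
```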