Zero is Not Local-First. It's Better.
If you've used local-first libraries before, you'll know the pitch. The idea is that the data your app needs is synced to the browser or device and stored locally, so that all reads and writes happen locally, with changes syncing to the server when possible. This gives an offline-capable experience, where the app always feels fast and responsive.
But there is a catch. Many of these libraries want to sync everything.
That sounds nice in theory, but in practice it makes it hard to build apps with a lot of data. Think of something like Twitter's "For You" tab, where we can't realistically bring the entire dataset into the client.
Zero is a sync engine. It helps us keep data in sync between the client and server, but it does not aim to turn our app into an offline-first experience where all data is available locally at all times. It takes a different approach.
In fact, if you're coming from a typical React stack, a better mental model is to think of Zero as a potential replacement for tools like TanStack Query or SWR. It gives us a way to query server data (server as source-of-truth) with built-in caching and live updates, without the boilerplate of manual revalidation or trying to mirror server state on the client for optimistic updates.
It's not really local-first. It's partial sync first.
Partial sync
In Zero, we query the data we want, and that is all that gets synced. For example, if we query for a single todo item by ID:
const [todo] = useQuery(z.query.todos.where('id', '=', id));
Zero will sync that single todo item into its local cache. It will not sync the entire list of todos unless we query for that explicitly.
The lifetime of this data is tied to the query itself. While our component is using the query, the data is cached and syncing. When the query is unmounted, Zero can clean up the unused data in its local cache.
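If we want the data to outlive the component and keep syncing for a while after unmount, useQuery also accepts a ttl option, as I understand Zero's current API (it's the same ttl we'll meet again with preloading):
// keep this query's data cached and syncing for a while after the last unmount
const [todos] = useQuery(z.query.todos.limit(50), { ttl: '5m' });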
This model gives us more control over what is stored locally, which is by design. Storing too much data in the browser can hit IndexedDB limits, slow down initial syncs, and cause excessive memory usage.
By syncing only the data we're actively using, we can keep the app fast and responsive, but this raises a question when navigating between pages. If a page requires data that is not yet available, we might see a visible flicker. To help with this, Zero supports preloading, which allows us to prepare data ahead of time.
Preloading
Zero's preload() API looks like this:
const preloadTodos = z.query.todos.limit(50).preload({ ttl: '1d' });
Preloading stores the data in the local cache (e.g. IndexedDB) and continually syncs it, but does not materialise it into JavaScript objects until an equivalent useQuery mounts. This helps keep memory usage low while still ensuring data is ready for fast display. To deactivate the preload syncing we can call preloadTodos.cleanup().
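For example, a minimal sketch of tying a preload to a component's lifecycle (assuming a z instance is in scope):
React.useEffect(() => {
  // start syncing the first 50 todos into the local cache
  const preloadTodos = z.query.todos.limit(50).preload({ ttl: '1d' });
  // stop the preload's syncing when the component unmounts
  return () => preloadTodos.cleanup();
}, []);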
One pattern I've landed on is to export a preload function from each page in the app, which calls the preloaders for any queries on that page:
// preloader for a page that renders todos and lists
export const preloadPage = () =>
  preloaders((z) => [
    z.query.lists.limit(50).preload({ ttl: '1d' }),
    z.query.todos.limit(50).preload({ ttl: '1d' }),
  ]);
// a shared utility for composing preloaders:
const timeout = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

export const preloaders = (withPreloaders) => {
  const z = getZero(); // however your app exposes its Zero instance
  const active = withPreloaders(z);
  const cleanup = () => active.forEach((p) => p.cleanup());
  const complete = (async () => {
    const promise = Promise.all(active.map((p) => p.complete));
    // don't block navigation forever if syncing is slow
    await Promise.race([promise, timeout(5000)]);
  })();
  return { complete, cleanup };
};
We can then compose a custom Link component that triggers the preload on hover or with an IntersectionObserver, and automatically deactivates it when unmounted.
import { preloadPage } from './page';

export const PageLink = (props) => (
  <Link {...props} href="/page" prefetcher={preloadPage} />
);
This way, by the time the user clicks the link, the data is more likely to be in the local cache and ready to render. To improve that likelihood, we can use a React transition to ensure the navigation waits for the preload to complete.
const Link = ({ prefetcher, ...props }) => {
  const [isPending, startTransition] = React.useTransition();
  const prefetcherRef = React.useRef();
  const router = useRouter();

  // preload when the link scrolls into view, clean up when it leaves
  // https://usehooks-ts.com/react-hook/use-intersection-observer
  const { ref: linkRef } = useIntersectionObserver({
    onChange: (isIntersecting) => {
      if (isIntersecting) {
        prefetcherRef.current = prefetcher?.();
      } else {
        prefetcherRef.current?.cleanup();
        prefetcherRef.current = undefined;
      }
    },
  });

  // also clean up any active preload on unmount
  React.useEffect(() => {
    return () => prefetcherRef.current?.cleanup();
  }, []);

  return (
    <FrameworkLink
      {...props}
      ref={linkRef}
      className={cn(isPending && 'opacity-50', props.className)}
      onClick={(event) => {
        props.onClick?.(event);
        event.preventDefault();
        // wait for data to preload before navigating
        startTransition(async () => {
          if (prefetcherRef.current) await prefetcherRef.current.complete;
          router.push(props.href);
        });
      }}
    />
  );
};
I've dimmed the link opacity during the transition in this example, but a central global spinner could be used instead. With this setup, it should be rare for users to ever see a loading state, despite the partial syncing.
Mutations and Optimistic UI
So far, we've talked about reading and preloading data. But what about writing data? This is where Zero's approach to mutations is really interesting. They're isomorphic: we define mutators once and they run on both client and server!
To create mutators, the docs suggest writing a createMutators function that returns them. On the server, we expose a push handler endpoint that calls this function. On the client, we pass it when instantiating Zero so local writes can run immediately.
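As a rough sketch of that wiring, based on my reading of Zero's docs (names like publicServerURL and handlePush are illustrative, and the exact server helpers may shift while Zero is in alpha). On the client:
// register the mutators when constructing Zero so writes apply locally first
import { Zero } from '@rocicorp/zero';
import { createMutators } from './mutators';

const z = new Zero({
  userID,
  schema,
  server: publicServerURL, // wherever zero-cache is running
  mutators: createMutators(),
});
And on the server, a push endpoint that replays the same mutators against Postgres:
import { PushProcessor, ZQLDatabase, PostgresJSConnection } from '@rocicorp/zero/pg';
import postgres from 'postgres';
import { schema } from './schema';
import { createMutators } from './mutators';

const processor = new PushProcessor(
  new ZQLDatabase(new PostgresJSConnection(postgres(process.env.ZERO_UPSTREAM_DB)), schema),
);

export const handlePush = async (request) =>
  Response.json(await processor.process(createMutators(), request));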
Here is an example of a custom upsert mutator:
const createMutators = () => ({
  todo: {
    upsert: async (tx, input) => {
      const now = Date.now();
      // check whether the todo already exists in the current store
      const existing = await tx.query.todos
        .where('id', '=', input.id)
        .one()
        .run();
      if (existing) {
        await tx.mutate.todos.update({ ...input, updatedAt: now });
      } else {
        await tx.mutate.todos.insert({
          ...input,
          createdAt: now,
          updatedAt: now,
        });
      }
    },
  },
});
Zero has a built-in upsert API for mutations that don't require a createdAt field, but I specifically chose this example to highlight something neat about isomorphic mutators.
When we call z.mutate.todo.upsert, the client runs the mutation locally first. If the todo is not present locally, it inserts it so the UI updates straight away. The same mutator then runs on the server, but if the todo already exists on the server, it updates it.
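Calling it from the UI is then a one-liner (the field values here are illustrative):
// applied optimistically to the local cache, then pushed to the server
z.mutate.todo.upsert({ id: todoId, title: 'Buy milk', completed: false });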
If we need to add some client-only or server-only logic here, there is a reason property on the transaction (tx.reason === 'optimistic') that we can use to branch our logic. Alternatively, we can create a createServerMutators function that composes the client mutators and extends them with server-only logic that requires access to secrets.
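Here's a sketch of that composition approach; the sendEmail dependency is hypothetical and stands in for anything that needs server-side secrets:
// server-only mutators that wrap and extend the shared client mutators
export const createServerMutators = (clientMutators, { sendEmail }) => ({
  ...clientMutators,
  todo: {
    ...clientMutators.todo,
    upsert: async (tx, input) => {
      // run the shared logic first so client and server stay in step
      await clientMutators.todo.upsert(tx, input);
      // then do the work that must only ever run on the server
      await sendEmail(`todo ${input.id} was upserted`);
    },
  },
});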
This approach allows us to write mutators once without needing separate client and server logic, and handles cases where the client and server state may be slightly out of sync during the window when an optimistic update is applied. It makes it much easier to implement reliable optimistic UIs, without race conditions.
Conflict resolution
Zero's conflict resolution is not currently designed to handle apps that are offline for long periods. If our app goes offline, Zero will retry pending mutators, and eventually throw when a retry threshold is reached. Supporting fully offline writes with conflict resolution often introduces more complexity than it's worth.
Instead, if mutators eventually fail, Zero will revert any optimistic changes locally. This ensures that our UI remains consistent with the server once connectivity is restored.
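If we want to surface that failure in the UI ourselves, my understanding is that a mutator call also exposes a promise for the server's result that we can await; treat the exact shape as an assumption and check the docs (showToast here is a hypothetical helper):
try {
  // resolves once the server has run the same mutator, rejects if it fails
  await z.mutate.todo.upsert({ id: todoId, title: 'Buy milk' }).server;
} catch (err) {
  showToast('Could not save your change');
}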
Server Reconciliation
Many local-first frameworks treat the client as the source of truth, or even go as far as encrypting all data so that the server cannot see it. That's great when the goal is complete privacy or offline-first, but it makes debugging and consistency trickier.
Zero flips this. It keeps the client fast and responsive, while letting the server remain the source of truth. It feels more like what we're used to in traditional web stacks.
It borrows a technique from the game industry called Server Reconciliation, where the client sends inputs and the server derives the resulting state. You can read about it in this article (Reflect was the predecessor to Zero and used the same technique).
Bring your own backend
Zero is built to integrate with your existing backend. While some local-first solutions or sync engines require opting into their infrastructure or custom database, Zero has no vendor lock-in and no custom database to adopt. We can use our own.
Currently, Zero ships with first-class Postgres support, and there are experimental adapters for MongoDB and Materialize, with more to come.
For teams already invested in their stack or building with portability in mind, this flexibility means we get the benefits of a local-first solution, without giving up our existing database, tooling, or hosting setup.
Conclusion
It is worth noting that Zero is still in alpha. We can follow their roadmap for progress towards the upcoming beta release. The current API is already very usable, but it is a fast-evolving project.
Today, Zero introduces a very useful model for building modern apps.
- Partial sync lets us build apps like Reddit without overloading the local cache.
- Server as source-of-truth, aiding debugging and ensuring consistency across clients.
- Preload APIs let us warm the cache ahead of navigation, so data is ready to render when the user arrives.
- Optimistic updates require a fraction of the boilerplate, thanks to isomorphic mutators.
- The UI stays reactive and consistent, with live updates from the server.
- No vendor lock-in or custom database, use your own stack.
It's not local-first. It's partial sync first. And for many projects, it's a better fit.