We Built Our Own Hot Reload Because Bun Doesn't Have One (And It's Better)

You change a file. You switch to the browser. You hit refresh.
Nothing changed.
You check the terminal. The build is still running. You wait. You refresh again. Still the old version. You clear the cache. Refresh. Now it works. But the page flash is so harsh you lose your place.
This is the developer experience tax that kills productivity. Not in minutes per incident — in seconds. But those seconds compound across hundreds of saves per day, and the cognitive disruption of broken flow is worse than the time itself.
When we chose Bun as our runtime for the mentoring platform at mentoring.oakoliver.com, we knew we were giving up the hot module replacement (HMR) that tools like Vite and webpack provide out of the box. Bun has a blazing-fast bundler, but no development server with HMR built in.
Most teams would reach for Vite anyway and use Bun only as a runtime. We didn't. We built our own file-watching, hot-rebuild system from scratch.
And honestly? It's better than what we had before.
I – Why Not Just Use Vite?
Fair question. Vite is excellent. I've used it on plenty of projects. But for the mentoring platform, it was the wrong tool.
Our application has a specific architecture: three separate SPA bundles (public, auth, private) compiled by Bun's native bundler at server startup. The Elysia.js server selects which bundle to serve based on the URL path. This isn't a standard single-SPA setup — it's a custom code-splitting strategy where the server decides the entry point.
Vite assumes it is the server. It serves the HTML, it injects the HMR client, it manages the module graph. But in our architecture, Elysia is the server. It handles routing, API calls, SSG hydration, and bundle selection. Inserting Vite into this pipeline would mean either giving up control of the server or running two servers side by side.
We tried the two-server approach. It was a mess. CORS issues between the API server and Vite's dev server. Port conflicts. Race conditions where the API server started before Vite finished its initial build. Session cookies scoped to one port not being sent to the other.
The fundamental problem: our architecture treats the bundler as a library, not a framework. Bun.build() is a function we call. Vite is a framework we'd have to bend our architecture around.
So we built the file watcher ourselves.
II – The Three-Bundle Architecture
Before diving into the watcher, you need to understand what it's watching.
The mentoring platform compiles three bundles at startup:
The public bundle. Homepage, mentor listings, about page, FAQ. This is the "marketing site" bundle. It's lightweight — about 400KB. Any visitor, authenticated or not, might load this.
The auth bundle. Sign-in, sign-out, email verification, magic link landing. About 150KB. Loaded only when users are in the auth flow.
The private bundle. Dashboard, profile management, messaging, session pages, billing. This is the full application. About 1.5MB. Loaded only for authenticated users.
Each bundle has its own entry point file. Each imports a different set of components. Some components are shared across bundles (the design system, for instance), but most are bundle-specific.
When a file changes during development, we need to know which bundle it affects. Rebuilding all three bundles for every file change would be wasteful — the public bundle doesn't care if you changed a dashboard component. The auth bundle doesn't care about the mentor listing page.
This selective rebuild requirement is what makes our watcher more sophisticated than a simple "rebuild everything on change" approach.
III – File System Events: The Unreliable Foundation
Every file watching system starts with the operating system's file system events. On macOS, that's FSEvents. On Linux, it's inotify. On Windows, it's ReadDirectoryChangesW.
All of them are unreliable in the same ways.
Rapid successive saves (like auto-save or format-on-save) trigger multiple events for a single logical change. Some editors don't modify files in place — they write to a temp file and rename it, generating delete and create events instead of a modify event. Network-mounted filesystems might not generate events at all.
Bun exposes file watching through its native fs.watch API, which wraps the OS-level mechanisms. It's fast and efficient, but it inherits all the platform-specific quirks.
The first lesson: never trust a single file system event. Always debounce. Always verify the file actually changed. And never assume the event type accurately describes what happened.
IV – Debouncing: The Art of Waiting Just Long Enough
When you hit save in your editor, multiple events fire.
If your editor has format-on-save enabled, it might save the file, then format it, then save again. That's three or four modify events for one logical change. If you have a linter that auto-fixes on save, add a few more.
Without debouncing, each event triggers a rebuild. You'd get three or four consecutive rebuilds for a single save, each one invalidating the previous. The last rebuild is the only one that matters. The rest waste CPU time and — worse — create a flickering effect where the browser briefly loads a partially-rebuilt bundle.
Our debounce strategy:
When a file event arrives, we don't rebuild immediately. We start a timer. If another event arrives for the same bundle before the timer fires, we reset the timer. The rebuild only happens when the timer fires without interruption.
The debounce window is 150 milliseconds.
Why 150ms? Empirically determined. We measured the time between the first and last event for a single save operation across VS Code, Neovim, and WebStorm. The longest gap was about 120ms (VS Code with Prettier format-on-save). 150ms gives a comfortable margin.
Too short (50ms), and rapid saves still trigger multiple rebuilds. Too long (500ms), and the developer perceives the rebuild as sluggish. 150ms is the sweet spot — fast enough to feel instant, slow enough to catch all save-related events.
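The per-bundle debounce described above can be sketched in a few lines. This is a minimal illustration, not the actual watcher code — the `Bundle` type, the `scheduleRebuild` name, and the timer map are assumptions; only the 150ms window and the reset-on-new-event behavior come from the article.

```typescript
// Per-bundle debounce: a burst of file events for the same bundle
// collapses into a single rebuild, fired 150ms after the last event.
type Bundle = "public" | "auth" | "private";

const DEBOUNCE_MS = 150;
const timers = new Map<Bundle, ReturnType<typeof setTimeout>>();

function scheduleRebuild(bundle: Bundle, rebuild: (b: Bundle) => void) {
  // A new event for the same bundle resets its pending timer,
  // so only the last event in a burst triggers a rebuild.
  const pending = timers.get(bundle);
  if (pending !== undefined) clearTimeout(pending);
  timers.set(
    bundle,
    setTimeout(() => {
      timers.delete(bundle);
      rebuild(bundle);
    }, DEBOUNCE_MS),
  );
}
```

Keying the timer map by bundle (rather than using a single global timer) means a save in the dashboard and a save in the landing page can debounce independently.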
V – Dependency Mapping: Which Bundle Cares?
This is the core intelligence of the watcher.
When a file changes, we need to determine which of the three bundles depends on it. A change to a component in the dashboard page should only rebuild the private bundle. A change to a shared design system component should rebuild all three.
The naive approach: build a full dependency graph by parsing imports.
We considered this. It's what Vite does — it maintains a module graph that traces every import chain. But maintaining a live dependency graph has its own complexity: circular imports, dynamic imports, re-exports, barrel files. Each of these creates edge cases in graph traversal.
Our approach is simpler: directory-based heuristics plus an explicit override map.
The file structure of the application already encodes which bundle owns which files. Pages in the dashboard directory belong to the private bundle. Pages in the landing directory belong to the public bundle. Auth flow components belong to the auth bundle.
The watcher maps file paths to bundles using a set of prefix rules. If the changed file is under the pages/dashboard path, it's the private bundle. Under pages/landing, it's public. Under pages/auth, it's auth.
What about shared files? Components in the shared design system directory, utility functions, context providers — these could affect any bundle. For shared files, we rebuild all three bundles. Yes, this means a shared component change triggers three rebuilds instead of one. But shared components change infrequently compared to page-specific components, and the triple rebuild still completes in under 200ms total thanks to Bun's native bundler speed.
The explicit override map handles edge cases. If a file doesn't match any directory heuristic, the override map lets us manually assign it to a bundle (or to "all"). This covers one-off files that live in unusual locations.
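The prefix rules plus the override map can be sketched as a single lookup function. The directory prefixes follow the article; the override entry and the fallback-to-"all" behavior are illustrative assumptions, not the real configuration.

```typescript
// Directory-based heuristics plus an explicit override map.
type Target = "public" | "auth" | "private" | "all";

const prefixRules: Array<[prefix: string, target: Target]> = [
  ["pages/dashboard/", "private"],
  ["pages/landing/", "public"],
  ["pages/auth/", "auth"],
  ["shared/", "all"], // design system, utilities, context providers
];

// One-off files in unusual locations (hypothetical example entry).
const overrides: Record<string, Target> = {
  "lib/session-helpers.ts": "private",
};

function bundlesFor(path: string): Target {
  if (path in overrides) return overrides[path];
  for (const [prefix, target] of prefixRules) {
    if (path.startsWith(prefix)) return target;
  }
  // Unknown files rebuild everything: correctness over speed.
  return "all";
}
```

The override map is checked first so it can win over a prefix rule, and the unknown-file fallback errs on the side of over-rebuilding, which costs milliseconds rather than correctness.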
Could we build a full dependency graph instead? Sure. But the directory-based approach has been correct for 99% of changes, and the 1% where it over-rebuilds costs us milliseconds, not seconds. Simplicity wins.
VI – The Rebuild Pipeline
Once the debounce timer fires and we know which bundle to rebuild, the pipeline kicks in.
Step 1: Invalidate the cache. The Elysia server caches compiled bundles in memory to avoid re-reading from disk on every request. When a rebuild starts, the cache entry for the affected bundle is invalidated. Any request that arrives during the rebuild will trigger a synchronous build (which we'll address in the next section).
Step 2: Call Bun.build(). Bun's bundler is invoked with the entry point for the affected bundle. This produces a new JavaScript file and optionally a source map. On our development machine (M1 MacBook), this takes 30-80ms for the public bundle, 15-40ms for the auth bundle, and 80-200ms for the private bundle.
Step 3: Update the cache. The newly compiled bundle is loaded into memory, replacing the invalidated entry. The hash of the new bundle is computed for cache-busting purposes.
Step 4: Log the result. A colored, formatted log line shows which bundle was rebuilt, how long it took, and the output size. Green for success, red for failure. The log includes the triggering file path so you can verify the watcher identified the correct bundle.
Total time from file save to fresh bundle available: 180-350ms. Debounce (150ms) plus build (30-200ms). This is perceptible but not disruptive. The browser refresh completes before your eyes finish traveling from the editor to the browser.
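The four steps can be condensed into one function. In this sketch the builder is injected as a parameter so the example stays runtime-agnostic; in the real pipeline that parameter would be a direct `Bun.build()` call. The cache shape, the FNV hash, and all names here are illustrative assumptions.

```typescript
// Invalidate → build → cache → log, as one pipeline function.
type Built = { code: string; hash: string };

const builtCache = new Map<string, Built | null>();

// Tiny FNV-1a hash, standing in for real content hashing.
function fnv1a(s: string): string {
  let h = 0x811c9dc5;
  for (let i = 0; i < s.length; i++) {
    h ^= s.charCodeAt(i);
    h = Math.imul(h, 0x01000193) >>> 0;
  }
  return h.toString(16);
}

async function rebuildBundle(
  bundle: string,
  build: (bundle: string) => Promise<string>,
): Promise<Built> {
  builtCache.set(bundle, null);     // Step 1: invalidate the cache entry
  const started = performance.now();
  const code = await build(bundle); // Step 2: compile (Bun.build in reality)
  const entry = { code, hash: fnv1a(code) };
  builtCache.set(bundle, entry);    // Step 3: cache fresh bundle + hash
  const ms = Math.round(performance.now() - started);
  console.log(`rebuilt ${bundle} in ${ms}ms (${code.length} bytes)`); // Step 4: log
  return entry;
}
```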
VII – Cache Invalidation: The Hardest Problem in Computer Science
The famous Phil Karlton quote lands differently when you're implementing it at 2 AM.
Our caching strategy has three layers, and each needs its own invalidation approach.
Layer 1: Server memory cache. The compiled bundle JavaScript is held in a JavaScript variable. Invalidation is trivial — set the variable to null. The next request triggers a rebuild or waits for the ongoing one.
Layer 2: Browser cache. The browser caches the JavaScript bundle aggressively (because in production, bundles are content-hashed and immutable). In development, we need to bust this cache on every rebuild.
We do this with a query parameter approach. Each bundle URL includes a version parameter derived from the build timestamp. When the bundle is rebuilt, the version changes, and the next HTML response includes the new URL. The browser treats it as a different resource and fetches fresh.
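A sketch of the version parameter, assuming a timestamp-derived version and an illustrative URL shape (the real route structure isn't shown in the article):

```typescript
// Cache-busting via a query parameter tied to the last build.
let buildTimestamp = Date.now();

function onRebuildComplete() {
  buildTimestamp = Date.now(); // new version on every rebuild
}

function bundleUrl(bundle: string): string {
  // Same path, new ?v= value: the browser treats it as a
  // different resource and fetches fresh.
  return `/bundles/${bundle}.js?v=${buildTimestamp}`;
}
```

Because the version lives in the URL the server renders into the HTML, no client-side code is needed to participate in the scheme.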
Layer 3: Elysia's static file serving cache. Elysia can cache static file responses internally. During development, we disable this caching entirely. The performance difference is negligible (we're serving from memory, not disk), and it eliminates an entire category of "why is it still showing the old version" bugs.
The lesson: in development, cache nothing. In production, cache everything. The worst developer experience bugs come from stale caches, and no amount of "just hard refresh" advice makes them less frustrating.
VIII – Error Handling: When the Build Fails
Syntax errors happen. Import typos happen. Missing dependencies happen. The watcher needs to handle build failures gracefully.
Rule 1: Never crash the server. A failed build should not take down the Elysia process. The API, the other bundles, the SSE connections — everything else should keep running. The failing bundle simply doesn't update.
We wrap every Bun.build() call in a try-catch. A build failure logs the error with the full stack trace, colorized in red, and preserves the last successful bundle in the cache. The developer sees the error, fixes the file, saves again, and the watcher triggers a new build.
Rule 2: Show the error in the browser. This is the one place we went beyond our "no HMR" stance. When a build fails, the cached bundle is replaced with a minimal error display script that renders the error message directly in the browser. It's not pretty — just the error text in a monospace font on a red background. But it means you don't have to check the terminal to know the build failed.
When the next build succeeds, the error display is automatically replaced with the real bundle.
Rule 3: Don't log the same error twice. If the developer hasn't changed the file, the next file system event (from auto-save retries or editor background processes) shouldn't spam the terminal with the same error. We track the last error message per bundle and suppress duplicates.
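Rules 1 and 3 together can be sketched as a wrapper around the build call. The `safeBuild` name and the per-bundle error map are illustrative; the behavior (never crash, keep the last good bundle, suppress duplicate errors) follows the rules above.

```typescript
// Catch build failures and suppress repeated identical errors.
const lastError = new Map<string, string>();

async function safeBuild(
  bundle: string,
  build: () => Promise<void>,
): Promise<boolean> {
  try {
    await build();
    lastError.delete(bundle); // success clears the suppression state
    return true;
  } catch (err) {
    const message = err instanceof Error ? err.message : String(err);
    if (lastError.get(bundle) !== message) {
      lastError.set(bundle, message);
      console.error(`[${bundle}] build failed: ${message}`);
    }
    // Server keeps running; the last good bundle stays cached.
    return false;
  }
}
```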
IX – The Race Condition Nobody Warns You About
Here's a subtle bug that took us a day to find.
The file watcher detects a change. The debounce timer starts. During the debounce window, a browser request arrives for the bundle. The old cached version is gone (invalidated when the change was detected). The new version isn't built yet (debounce hasn't fired). What happens?
With a naive implementation: the request hangs or returns a 500.
Our solution: a build promise that multiple requesters can await.
When a change is detected and the cache is invalidated, we don't just set the cache to null. We set it to a Promise that resolves when the build completes. Any request that arrives during the build awaits this Promise. When the build finishes, all waiting requests resolve simultaneously with the fresh bundle.
This means the first request after a file change might take 180-350ms instead of 5ms. But it gets the correct, freshly-built bundle. And subsequent requests (for the same bundle version) get the cached result instantly.
This pattern — replacing a cached value with a Promise of the next value — is surprisingly useful. It eliminates an entire class of race conditions where multiple consumers need the same freshly-computed resource.
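The pattern is compact enough to show in full. This is a generic sketch with string values standing in for compiled bundles; the map and function names are assumptions.

```typescript
// Cache entries are either a settled value or a Promise of the
// next one. Concurrent requesters all await the same in-flight build.
const bundles = new Map<string, string | Promise<string>>();

function invalidate(key: string, build: () => Promise<string>) {
  // Don't set null: store the in-flight build itself.
  bundles.set(
    key,
    build().then((fresh) => {
      bundles.set(key, fresh); // settle back to a plain value
      return fresh;
    }),
  );
}

async function getBundle(key: string): Promise<string | undefined> {
  // Awaiting a plain value is a no-op, so callers never care
  // whether they hit a settled entry or an in-flight build.
  return await bundles.get(key);
}
```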
Want to see our build system in action?
The mentoring platform at mentoring.oakoliver.com uses this exact file-watching architecture in development. Three bundles, selective rebuilds, sub-second feedback loops — all powered by Bun's native bundler and a few hundred lines of watcher code.
If you're building developer tooling, migrating from webpack or Vite, or exploring Bun for production use, let's talk. Book a session at mentoring.oakoliver.com or check out more engineering deep-dives at oakoliver.com.
X – Watch Mode vs. Rebuild Mode
Our watcher operates in two modes, and switching between them was one of the best DX decisions we made.
Watch mode is what we've been discussing: the watcher monitors the filesystem, debounces changes, and selectively rebuilds. This is for active development — when you're writing code and want instant feedback.
Rebuild mode is for when you switch branches, pull changes, or install new dependencies. In these scenarios, you don't want selective rebuilds — you want a full clean build of all three bundles. Watch mode might miss some changes (git checkout doesn't always trigger filesystem events for every changed file), so you need a way to force a complete rebuild.
We trigger rebuild mode in two ways:
A keyboard shortcut in the terminal (pressing 'r' while the server is running) forces all three bundles to rebuild immediately, bypassing the watcher entirely.
Automatic detection: when we see a change to package.json, bun.lockb, or tsconfig.json, we automatically switch to rebuild mode for that cycle. These files indicate structural changes that affect everything.
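The automatic trigger amounts to a basename check against a small set. The file list follows the article; the function name is illustrative.

```typescript
// Structural files bypass selective rebuilds and force a full
// rebuild of all three bundles for that cycle.
const STRUCTURAL_FILES = new Set(["package.json", "bun.lockb", "tsconfig.json"]);

function needsFullRebuild(changedPath: string): boolean {
  const base = changedPath.split("/").pop() ?? changedPath;
  return STRUCTURAL_FILES.has(base);
}
```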
The distinction matters because selective rebuilds save time but can miss cascading changes. If a shared type definition changes, the directory heuristic correctly identifies it as a "rebuild all" file. But if a dependency update changes the behavior of an imported module without changing any source files, only a full rebuild catches it.
XI – The Watcher's Lifecycle
The watcher isn't a standalone process. It lives inside the Elysia server process, started conditionally based on the environment.
In production: no watcher. Bundles are compiled once at startup and cached forever. The watcher code isn't even imported — the import is conditional on the NODE_ENV variable. This means production bundles don't include any watcher-related code.
In development: the watcher starts after the initial build.
The startup sequence is:
- Elysia server starts
- All three bundles are compiled (the initial build)
- The watcher initializes, registers filesystem listeners
- The server begins accepting requests
Why after the initial build? Because the watcher might detect changes that occurred while bundles were being built (like an editor auto-save during startup). If the watcher started first, it would trigger rebuilds that conflict with the initial build. By waiting, we ensure the initial build is clean and complete before the watcher takes over.
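The ordering constraint can be made explicit in the startup function itself. Everything here is a placeholder sketch (`start`, the step-recording array, the loop over bundle names); only the sequence — build first, then watcher (dev only), then listen — comes from the article.

```typescript
// Startup sequence: initial build completes before the watcher
// attaches, and the server listens only after both.
async function buildAll(steps: string[]) {
  for (const b of ["public", "auth", "private"]) steps.push(`built-${b}`);
}

async function start(env: string): Promise<string[]> {
  const steps: string[] = [];
  steps.push("server-created");
  await buildAll(steps);             // initial build, all three bundles
  if (env !== "production") {
    steps.push("watcher-started");   // dev only: never runs in production
  }
  steps.push("listening");           // accept requests last
  return steps;
}
```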
The watcher also gracefully shuts down when the server stops. All filesystem listeners are unregistered, pending debounce timers are cancelled, and in-progress builds are allowed to complete. This prevents orphaned file watchers that consume system resources after the server exits.
XII – Bun.build() As a Library: The Underrated Superpower
Most developers think of Bun's bundler as a CLI tool — something you run as a build step. But Bun.build() is a function. You call it from JavaScript. You pass options as an object. You get a result back.
This is fundamentally different from running webpack, Rollup, or esbuild as CLI build steps.
When the bundler is a function, it becomes a building block in your application. You can call it conditionally. You can call it in response to events. You can call it with dynamic options computed at runtime.
For our watcher, this means the rebuild pipeline is just a function call with a try-catch. No spawning child processes. No parsing CLI output. No IPC between processes. The watcher and the bundler live in the same process, share the same event loop, and communicate through regular JavaScript function calls.
The performance implication is significant. Spawning a child process to run a build takes 50-100ms on macOS before the build even starts. Calling Bun.build() directly has zero process-spawn overhead. The 30-80ms build times we see include only the actual bundling work — no startup tax.
This is why our hot-rebuild feels faster than tools that shell out to their bundler. The overhead isn't in the bundling. It's in the process lifecycle around it.
XIII – What We Didn't Build: Hot Module Replacement
Let me be clear about what our watcher doesn't do. It doesn't do HMR.
HMR replaces changed modules in the running application without a full page reload. It preserves component state, keeps you scrolled to the same position, and maintains form inputs. It's genuinely magical when it works.
We chose not to build it. Here's why.
HMR requires a client-side runtime that manages the module graph, patches changed modules, and re-renders affected components. This runtime adds complexity — both to the build output and to the debugging experience. When HMR goes wrong (and it does go wrong — stale closures, state desynchronization, style leaks), the bugs are bizarre and time-consuming to diagnose.
For the mentoring platform, a full page reload after a rebuild takes 300-500ms. The initial data is hydrated from the SSG cache via window.INITIAL_DATA, so there's no loading spinner — the page renders immediately with real data.
The cost of a full reload is 300-500ms. The cost of debugging a stale HMR state is 15-30 minutes. Over a week of development, the math is clear.
That said, we do preserve the URL across reloads. The browser refreshes to the same page you were on, which preserves most of the context you care about. And because our auth uses HTTP-only cookies, you stay logged in across reloads.
Our pragmatic stance: HMR is a luxury optimization for projects where full reloads are slow. Our full reloads are fast. The optimization doesn't justify the complexity.
XIV – Measuring Developer Experience
We track rebuild performance as part of our development metrics. Not because we're obsessed with benchmarks, but because developer experience degrades slowly, and you don't notice until it's painful.
Every rebuild logs three metrics:
Detection latency: Time between the file save and the watcher receiving the filesystem event. This is typically 5-20ms on macOS, but can spike to 200ms+ if the system is under heavy I/O load.
Build duration: Time for Bun.build() to complete. This increases as the bundle grows. When it crosses 300ms, we investigate — usually it means an unnecessary large dependency was added to a bundle.
Total feedback loop: Time from file save to the new bundle being available in the cache. This is detection + debounce + build. Our target is under 400ms for any single-bundle rebuild.
We review these metrics weekly. If the private bundle build time increases from 120ms to 180ms, we investigate. Did someone add a new dependency? Did a shared utility grow in complexity? Is there a barrel file importing the entire design system when only one component is needed?
Treating DX as a measurable metric prevents the slow death of productivity. Nobody notices when the build gets 10ms slower. They notice when it's 3 seconds and wonder how it got that bad.
XV – The Surprisingly Simple Core
After all this discussion of debouncing, selective rebuilds, cache invalidation, race conditions, and error handling, you might expect the watcher to be a complex beast.
It's about 300 lines.
That's it. Three hundred lines of TypeScript, including comments, error handling, and the logging system.
The debounce logic is about 30 lines. The directory-to-bundle mapping is about 40 lines. The rebuild pipeline is about 50 lines. The cache management is about 40 lines. The file system listener setup is about 30 lines. Error handling and logging account for the rest.
No external dependencies. No configuration files. No plugin system. Just Bun's native fs.watch, Bun.build(), and a few well-placed setTimeouts.
This is the argument for building your own DX tooling when your architecture is non-standard. A generic tool (Vite, webpack) would bring 100,000+ lines of code and hundreds of configuration options to solve a problem that our specific architecture solved in 300 lines.
The maintenance burden is minimal. We've touched the watcher code three times in six months — once to add rebuild mode, once to improve error display, and once to fix a macOS-specific quirk with .DS_Store file events triggering unnecessary rebuilds.
XVI – When to Build Your Own vs. When to Use a Framework
I don't want to create the impression that everyone should build their own file watcher. That would be irresponsible advice.
Build your own when:
- Your architecture doesn't fit the framework's assumptions
- The problem is small and well-defined (file watching + bundler invocation)
- The framework would be the most complex dependency in your dev stack
- Your team understands the underlying primitives (fs events, bundler APIs)
Use a framework when:
- Your architecture is standard (SPA, SSR, or static site)
- You need features that are genuinely complex (HMR, CSS modules, tree-shaking optimization)
- The framework's opinions align with your project's needs
- Your team doesn't want to maintain build tooling
For us, the decision was clear. Our three-bundle, server-selected architecture is non-standard. The problem was small. The Bun bundler API was the only primitive we needed. And the alternative (Vite) would have required restructuring our server to accommodate it.
For most projects, Vite is the right answer. For our project, 300 lines of custom code was.
XVII – The Bigger Lesson: DX Is Architecture
Here's the insight that emerged from building this system.
Developer experience isn't a layer you add on top of your architecture. It is part of your architecture. The file watcher works well because the application architecture — three explicit entry points, directory-based component ownership, Bun.build() as a function — was designed in a way that makes file watching straightforward.
If our component structure were chaotic, the directory-to-bundle heuristic wouldn't work. If our bundler required a child process, the feedback loop would be 200ms slower. If our caching strategy used the filesystem instead of memory, invalidation would be complex.
Every architectural decision you make either helps or hurts your future DX tooling. Clear component ownership makes selective rebuilds possible. Bundler-as-a-function makes in-process rebuilding possible. In-memory caching makes instant invalidation possible.
When you're designing your application architecture, ask yourself: "How will I know when this file changes? How fast can I rebuild? How simply can I invalidate the old version?"
If the answers are clear, your DX tooling will be simple. If they're murky, you'll be fighting your architecture to build the tooling you need.
XVIII – The Question I Keep Coming Back To
Every framework promises a great developer experience. Vite is fast. Next.js has instant refresh. Turbopack is "700x faster." The benchmarks are impressive. The demos are slick.
But frameworks optimize for the general case. Your project is specific. Your architecture has constraints. Your team has preferences.
The most productive developer experience I've ever had isn't the fastest HMR or the prettiest error overlay. It's the one where I understand exactly what happens when I save a file. Where the feedback loop is predictable. Where errors are clear. Where the tooling serves the architecture instead of the other way around.
300 lines. Sub-400ms. Zero mysteries.
That's what building your own DX tooling buys you: understanding.
So here's the question:
When you save a file in your project, can you trace exactly what happens between your keystroke and the browser showing the result — and are you confident nothing is wasted?
If you can't, maybe it's worth finding out.
– Antonio