We Replaced Node.js With Bun Across Every Production Service. Here's What Nobody Tells You.

We didn't migrate one service. We migrated all of them.
A portfolio site. A mentoring platform serving 40 locales. A micro-SaaS hosting 99 AI-powered apps. All running on a single Hetzner VPS. All previously on Node.js 20.
All replaced with Bun in three weeks.
This isn't a synthetic benchmark post where someone runs a Fibonacci function and declares a winner. This is what happens when you actually ship Bun to production, stare at your metrics dashboard for a month, and take notes on every single thing that surprised you.
Here's what surprised me most: the reason we switched had almost nothing to do with speed.
I – The Real Reason We Switched
Let me disappoint you early.
We didn't switch because Bun is three times faster at HTTP requests. We switched because of something far more mundane — developer experience compounding over time.
At Oak Oliver, we run three production services on a single server costing less than forty euros a month. Each service uses Elysia.js. Each service uses TypeScript. And each service was drowning in configuration files that made me question my career choices.
Every Node.js service needed a package manifest, a separate TypeScript config, a package lock file, an npm config, a nodemon config. Five config files just to exist.
With Bun, that collapsed to two. A TypeScript config and a package manifest. That's it.
Bun runs TypeScript natively. No transpilation step. No watcher daemon restarting your process. No intermediary tool chain between you and your code. You point Bun at a TypeScript file and it just runs.
But here's the moment I knew we'd never go back.
Package installation on the mentoring platform — a real application with real dependencies — took 1.2 seconds with Bun. The identical operation with npm took 38 seconds.
That isn't a typo.
When you deploy four or five times a day across three services, those seconds compound into minutes. Those minutes compound into a fundamentally different relationship with shipping code.
You stop hesitating before deploying. You stop batching changes. You ship one fix, verify, ship the next. The tooling gets out of your way, and you move faster without trying.
II – The Only Benchmark That Matters
Everyone benchmarks HTTP throughput. Nobody benchmarks what actually matters for real applications.
Cold start to first request served.
Here's why this matters in our setup. Our services run behind Traefik in Docker containers managed by Coolify. When we deploy, the old container stays up until the new one passes a health check. The faster the new container boots, the shorter the window where anything can go wrong.
The portfolio site went from 1,840 milliseconds on Node.js to 310 on Bun. Nearly six times faster.
The mentoring platform — which has to compile 480 static HTML pages across 40 locales, load Prisma, initialize caches, and set up event streaming channels — dropped from 4.2 seconds to under 900 milliseconds. A 4.7x improvement on the most complex service.
The Vibe micro-SaaS platform dropped from 3.1 seconds to 620 milliseconds, despite loading a 13,870-line monolith. Bun's parser chews through large files like they're nothing.
Raw throughput? Sure, those numbers are dramatic too. We saw nearly three times more requests per second and four times better tail latency under load. But honestly, we don't serve 142,000 requests per second. We serve maybe 500.
What matters for us is P99 latency and memory footprint — because three services share one VPS, and every millisecond and every megabyte counts.
III – Eight Dependencies We Deleted
This is where Bun stops feeling like a faster Node.js and starts feeling like a different paradigm entirely.
Bun ships native APIs that replace entire npm packages. Not wrappers. Not polyfills. Native implementations baked into the runtime.
We deleted our build tooling dependency because Bun has a built-in bundler. The mentoring platform compiles three separate SPA bundles at startup. What previously required an external bundler library with fifteen lines of configuration became a single native function call. Build times for all three bundles dropped from 1.7 seconds to 650 milliseconds.
We deleted our password hashing library — the one that required native addon compilation and broke during Docker builds on ARM. Bun ships password hashing natively, including Argon2id, which is the current gold standard. No native addon. No compilation step. No ARM build failures.
We deleted our SQLite library. The Vibe platform uses per-user SQLite databases, and the previous library required native compilation on every install. Bun ships SQLite as a built-in module with an almost identical API. Migration was trivial. Performance improved by 40 to 60 percent on read-heavy workloads because Bun avoids the overhead of crossing back and forth between JavaScript and native code on every database call.
We deleted our file system utility library. Bun's native file API is lazy — it doesn't read a file until you explicitly ask for its contents. This matters for our SSG pipeline where we load dozens of templates but might not need all of them.
We deleted the TypeScript execution tool, the environment variable loader, the fetch polyfill, and the WebSocket library. All replaced by things Bun ships out of the box.
In total, we removed eight dependencies representing over 235 million weekly npm downloads combined. Not because those packages are bad — they're excellent. But because Bun ships their functionality as first-class features of the runtime itself.
Fewer dependencies means fewer things that can break. Fewer things to audit. Fewer supply chain risks. Fewer reasons to wake up at 3 AM.
IV – What Broke
Now for the part that makes this actually useful. Here's everything that went wrong, because I guarantee you'll hit at least one of these.
Node's vm module bit us first. Our mentoring platform used a sandboxed execution context for Markdown rendering. Bun's implementation has subtle differences in how it handles prototype chains inside those contexts. The fix was simple — we didn't actually need the sandbox since we controlled the input — but it took two hours of debugging production errors to figure out what was happening.
If you rely heavily on sandboxed execution or worker threads, test thoroughly before you migrate.
Workspace resolution tripped us up next. Bun's monorepo workspace implementation is mostly compatible, but the resolution algorithm differs at the edges. We had a shared UI package that resolved correctly under npm but sent Bun looking in the wrong directory. The fix required explicit path configuration in Bun's settings file.
Hot reloading confused us for a full day. Bun offers two modes — watch mode and hot mode — and they behave differently than you'd expect. Watch mode restarts the entire process on file changes, like nodemon. Hot mode does module replacement without restarting, which sounds ideal until you realize that side effects at the module level don't re-execute. Your route registrations don't update. Your server config doesn't refresh.
We wasted a day wondering why our code changes weren't taking effect before we realized we needed watch mode, not hot mode.
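The failure mode is easy to reproduce. In a toy module like this one, the route registration is a module-level side effect:

```typescript
// Why `bun --hot` confused us: module-scope statements like the registration
// below run once per process start. Hot mode swaps modules without restarting,
// so editing that line doesn't take effect until the next full start, while
// `bun --watch` restarts the process and picks it up immediately.
// (Toy sketch, not our actual route table.)
const routes = new Map<string, string>();
routes.set("/health", "ok"); // module-level side effect

export function resolve(path: string): string | undefined {
  return routes.get(path);
}

console.log(resolve("/health")); // "ok"
```

Rule of thumb: if your app does meaningful work at module scope, which almost every server does, you want watch mode.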
Docker images got larger. This one surprised us. The official Bun base image is about 50 megabytes larger than the slim Node.js image. We fixed it with a multi-stage build using a distroless final image, which brought the total down to 94 megabytes — smaller than either default.
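A sketch of the shape we landed on. Image tags, paths, and the distroless base here are illustrative, not our exact Dockerfile:

```
# Stage 1: install dependencies and compile with the full Bun image.
FROM oven/bun:1 AS build
WORKDIR /app
COPY package.json bun.lockb ./
RUN bun install --frozen-lockfile --production
COPY . .
RUN bun build ./src/index.ts --compile --outfile server

# Stage 2: ship only the compiled binary on a minimal base.
FROM gcr.io/distroless/base-debian12
COPY --from=build /app/server /server
CMD ["/server"]
```

The --compile flag produces a single self-contained executable, which is what makes the distroless final stage possible at all.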
Prisma needed extra attention. The mentoring platform uses Prisma for PostgreSQL. Prisma's query engine is a Rust binary downloaded during installation, and the combination of Bun plus ARM64 plus Docker meant we had to explicitly specify the correct binary target. Not Bun-specific exactly, but the intersection of new runtime, non-x86 architecture, and containerization required triple-checking everything.
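The fix was a one-line addition to the Prisma schema, roughly this, though the exact target string depends on your base image's architecture and OpenSSL version:

```
generator client {
  provider      = "prisma-client-js"
  binaryTargets = ["native", "linux-arm64-openssl-3.0.x"]
}
```

"native" keeps local development working on whatever machine you're on; the explicit ARM64 target ensures the Docker build downloads the engine the container will actually run.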
V – The Silent Win: Memory
This deserves its own section because it's the single most impactful production metric we observed.
Our VPS has 8 gigabytes of RAM shared across three application services plus PostgreSQL, Redis, and Traefik. Before the migration, memory pressure was a genuine operational concern.
After the migration, total memory usage across all three services dropped from 467 megabytes to 250 megabytes.
We freed up 217 megabytes of RAM.
On a server running everything on a single machine, that's the difference between comfortable headroom and swap-thrashing during traffic spikes. The mentoring platform alone dropped from 210 to 118 megabytes, partly because the built-in SQLite module is more memory-efficient, and partly because Bun's garbage collector is more aggressive with short-lived objects — which our SSG pipeline generates thousands of during the 480-page build.
VI – The Migration Playbook
If you're considering this, here's the exact sequence we followed.
Start with your test suite. Don't change anything else. Just run your existing tests on Bun instead of your current test runner. Fix what breaks. This tells you immediately how compatible your codebase is.
Replace your package manager next. Switch from npm install to bun install. Commit the new lock file. Remove the old one. This is the lowest-risk change and gives you the fastest feedback loop improvement.
Replace TypeScript execution third. Change your start script from using a TypeScript runner to pointing Bun directly at your entry file. No more transpilation layer.
Replace Node.js APIs with Bun native APIs one at a time. Start with file operations since they're the most compatible drop-in replacement. Then move to SQLite, password hashing, and so on.
Remove eliminated dependencies fifth. Only after confirming everything works.
Update Docker images last. This affects your deployment pipeline, so save it for when you're confident everything else is solid.
We did this per-service, starting with the portfolio site — the simplest — then Vibe, then the mentoring platform. Each migration was a separate pull request with its own staging test.
VII – One Month Later
It's been a month since the last service migrated. Here's the honest verdict.
Cold starts are five times faster across the board. We're using 217 megabytes less memory on our VPS. We maintain eight fewer dependencies. Package installs feel instant. Native TypeScript execution eliminated an entire category of configuration headaches.
One tool is now our bundler, our runtime, our package manager, and our test runner.
What hasn't changed: Elysia.js performs the same because it was already optimized for Bun. The developer workflow is identical once you're past the migration. And reliability has been flawless — zero runtime crashes attributable to Bun in thirty days.
What's worse: occasional compatibility issues with packages that rely on obscure Node.js internals. Stack traces that are sometimes less detailed. A smaller community, so you'll find fewer answers on Stack Overflow — though the Discord is genuinely excellent.
VIII – Should You Switch?
If you're starting a new project, yes. The developer experience advantage alone justifies it. Native TypeScript, fast installs, built-in bundler, built-in test runner, built-in SQLite — it's a batteries-included runtime that actually delivers on that promise.
If you're migrating an existing production system, yes, but methodically. Follow the playbook above. Budget two weeks for a medium-complexity service. Test on staging before you touch production.
If you're running on a framework that deeply integrates with Node.js internals — certain Next.js features, for instance — wait. Compatibility is improving every release, but edge cases remain.
If you're building production systems and want to talk through architectural decisions like this — runtime migrations, infrastructure trade-offs, when to follow the crowd versus when to chart your own path — that's exactly what I do in my mentoring sessions at mentoring.oakoliver.com. Over 100 sessions with engineers who ship real software.
IX – The Uncomfortable Truth
For us, the switch was unambiguously positive. Three production services on a single VPS, and the combined effect of faster starts, lower memory, fewer dependencies, and a unified toolchain has made everything simpler.
Not just faster. Simpler.
And in production, simple is the ultimate performance optimization.
We didn't chase benchmarks. We chased fewer moving parts. Fewer config files. Fewer reasons for a deploy to fail. Fewer layers between our code and the metal.
The benchmarks just happened to come along for the ride.
What's your relationship with your runtime? Is it a tool that gets out of your way, or is it a thing you work around?
– Antonio