We Built a 340-Line Static Site Generator That Made Next.js Irrelevant. It Renders 480 Pages in 650ms.

Next.js is a 300,000-line framework.
Our static site generator is 340 lines of TypeScript.
It generates 480 static HTML pages across 40 languages, hydrates them on the client with React and a pre-seeded React Query cache, and serves them from memory with sub-millisecond response times. It runs every time the server starts. No separate build step. No CI pipeline for static generation. No incremental static regeneration complexity.
We didn't set out to replace Next.js. We set out to solve a specific problem, and Next.js was standing in the way.
I – The Problem That Started Everything
mentoring.oakoliver.com is a mentoring platform. Mentors create profiles. Mentees browse, book sessions, and pay per minute through real-time billing. The platform supports 40 locales because mentors and mentees come from everywhere.
Here's what "40 locales" actually means when you do the math.
Twelve public routes: home, mentors listing, mentor detail, about, FAQ, pricing, terms, privacy, contact, blog, blog post, and 404. Each route rendered in 40 languages.
40 times 12 equals 480 unique HTML pages.
Each page needs fully rendered HTML for SEO — search engines must see real content, not a loading spinner. Each needs proper language tags and hreflang declarations for international search. Each needs pre-loaded data embedded directly in the HTML. And each needs React hydration so the page becomes interactive after the initial paint.
We started with Next.js 14's App Router. It seemed like the obvious choice.
It wasn't.
II – Why Next.js Failed Us
The deployment didn't fit. We deploy on a single Hetzner VPS using Coolify and Traefik. Next.js assumes you're deploying to Vercel or, at minimum, that you have a dedicated Node.js process running the framework's server. We already had an Elysia.js server handling our API, WebSocket connections for real-time billing, and authentication. Running a second server process just for page serving was wasteful on our resource-constrained machine.
The build time was punishing. With 480 pages, the Next.js build took 47 seconds in staging. That's 47 seconds of CI time on every deploy, plus the complexity of managing the build cache, plus the memory overhead of the build process itself. And our mentor data changes constantly — new mentors sign up, existing mentors update their profiles. Incremental static regeneration would have meant configuring revalidation intervals, handling on-demand revalidation through API routes, and debugging stale pages served to Google's crawler.
We wanted something simpler. Regenerate everything on deploy. Serve from memory. Done.
React Server Components added complexity we didn't need. Next.js 14 pushes you toward RSCs. They're interesting technology, but they introduce a new mental model, a new wire format, and a dependency on the framework's server runtime. We wanted standard React 19 client components with server-side string rendering. RSCs were solving problems we didn't have.
Bundle control was opaque. Next.js controls your bundling. You can configure their bundler, but you can't choose your own approach. We needed exact control over three separate SPA bundles — public, auth, and private — with different dependency trees and different loading strategies. Next.js's automatic code splitting is a black box. We wanted a glass box.
III – The Insight That Changed Everything
Here's the key architectural insight that unlocked the entire solution.
Static site generation doesn't need to be a separate build step. It can be a phase of server startup.
When our Elysia.js server boots, it loads all translation files for 40 locales. It fetches current data from PostgreSQL — mentor listings, FAQ content, testimonials, configuration. It renders 480 HTML pages using React 19's server-side rendering. It stores them in an in-memory map. And it serves them directly from that map on incoming requests.
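Here's roughly what that startup phase looks like. This is a minimal sketch, not the actual 340 lines: names like PageShell, publicRoutes, loadSiteData, loadTranslations, and pageCache are illustrative stand-ins, and error handling is omitted.

```tsx
// Minimal sketch of the startup generation phase. PageShell, locales, publicRoutes,
// loadSiteData, and loadTranslations are hypothetical stand-ins for the real modules.
import { renderToString } from "react-dom/server";
import { PageShell } from "./page-shell";
import { locales, publicRoutes } from "./routes";
import { loadSiteData, loadTranslations } from "./data";

// 480 pre-rendered pages live here, keyed by path.
export const pageCache = new Map<string, string>();

export async function generateAllPages(): Promise<void> {
  const started = performance.now();

  // One shared data load for every page.
  const [data, translations] = await Promise.all([loadSiteData(), loadTranslations(locales)]);

  for (const locale of locales) {
    for (const route of publicRoutes) {
      // Plain React 19 string rendering: no RSC wire format, no framework runtime.
      const html = renderToString(
        <PageShell locale={locale} route={route} data={data} messages={translations[locale]} />
      );
      pageCache.set(`/${locale}${route.path}`, `<!DOCTYPE html>${html}`);
    }
  }

  console.log(`Generated ${pageCache.size} pages in ${Math.round(performance.now() - started)}ms`);
}
```

Serving a page is then a single Map lookup inside the Elysia request handler.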
The entire process takes 650 milliseconds.
Let me break that down.
Database queries take about 45 milliseconds — four parallel queries through Prisma fetching mentors, FAQs, testimonials, and site configuration. Translation loading takes 12 milliseconds — 40 JSON files read in parallel through Bun's native file API.
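For illustration, the two loaders from the sketch above might look like this. The Prisma model names (mentor, faq, testimonial, siteConfig) and the locales directory are assumptions about our schema and layout, not a verbatim excerpt.

```ts
// Hypothetical loaders; model names and file layout are assumptions.
import { PrismaClient } from "@prisma/client";

const prisma = new PrismaClient();

export async function loadSiteData() {
  // Four queries run in parallel: ~45 ms total on our dataset.
  const [mentors, faqs, testimonials, config] = await Promise.all([
    prisma.mentor.findMany({ where: { published: true } }),
    prisma.faq.findMany(),
    prisma.testimonial.findMany(),
    prisma.siteConfig.findFirst(),
  ]);
  return { mentors, faqs, testimonials, config };
}

export async function loadTranslations(locales: string[]) {
  // 40 JSON files read in parallel through Bun's file API: ~12 ms total.
  const entries = await Promise.all(
    locales.map(async (locale) => [locale, await Bun.file(`./locales/${locale}.json`).json()] as const)
  );
  return Object.fromEntries(entries);
}
```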
The actual React rendering — 480 full HTML pages with layouts, components, metadata, and hreflang tags — takes 520 milliseconds. That's roughly 1.08 milliseconds per page.
Map insertion and string operations account for the remaining 73 milliseconds.
Compared to Next.js, this is a 72x speedup. Not a typo. Not a synthetic benchmark. Same content, same pages, same locale count. Next.js took 47 seconds. We take 650 milliseconds.
IV – Why 72x Faster Isn't Magic
The speedup sounds absurd, but the reasons are straightforward.
No framework overhead. Next.js runs its entire build pipeline on every generation: route analysis, code splitting, chunk optimization, manifest generation, image optimization. We skip all of that because we don't need any of it. Our routes are known at startup. Our code splitting is handled separately. Our images are on Cloudflare.
No filesystem writes. Next.js writes each page to disk as individual HTML files. We store them in memory. Disk I/O is expensive. Memory access is essentially free.
Shared data across all pages. We load data once and render 480 pages from it. We make exactly four database queries for 480 pages, not 480 separate data fetches. When you control the generation process, you control the data loading strategy.
Bun's rendering speed. React 19's server-side string rendering on Bun is significantly faster than on Node.js due to Bun's optimized string handling. At 1.08 milliseconds per page, the rendering itself is almost trivially cheap.
V – The Hydration Bridge
This is the most elegant part of the architecture, and it's what makes the SSG feel seamless to the user.
When the server renders a page, it embeds the data used for rendering directly into the HTML as a script tag. On the client side, the React entry point reads this embedded data and pre-populates the React Query cache before hydration begins.
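On the server side, the embedding can be as simple as a helper like this. The __SSG_DATA__ global name is an illustrative choice, not necessarily what our template uses.

```ts
// Sketch: inject the page's data into the rendered HTML before serving it.
function embedPageData(html: string, data: unknown): string {
  // Escape "<" so content can't close the script tag early.
  const json = JSON.stringify(data).replace(/</g, "\\u003c");
  return html.replace("</body>", `<script>window.__SSG_DATA__=${json}</script></body>`);
}
```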
React Query doesn't know or care where its initial cache data came from. Whether it came from an API call or was pre-populated from embedded page data, the component just calls its query hook and gets data immediately. No loading state on first render. No layout shift. No flash of empty content.
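The client entry point for the public bundle then does something along these lines. This is a sketch: the root element id, the global name, and the query keys are assumptions.

```tsx
// Sketch of the public-bundle entry point. Assumes the __SSG_DATA__ global from the
// server-side sketch; query keys and the #root element are illustrative.
import { hydrateRoot } from "react-dom/client";
import { QueryClient, QueryClientProvider } from "@tanstack/react-query";
import { App } from "./app";

const queryClient = new QueryClient({
  defaultOptions: { queries: { staleTime: 5 * 60 * 1000 } }, // treat embedded data as fresh
});

// Seed the cache before hydration so the first render has data and never flashes a loader.
const embedded = (window as any).__SSG_DATA__ ?? {};
if (embedded.mentors) queryClient.setQueryData(["mentors"], embedded.mentors);
if (embedded.faqs) queryClient.setQueryData(["faqs"], embedded.faqs);

hydrateRoot(
  document.getElementById("root")!,
  <QueryClientProvider client={queryClient}>
    <App />
  </QueryClientProvider>
);
```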
This creates a beautiful dual-mode behavior. On initial page load, data is already in the cache from the SSG embedding — so the component renders instantly with real content. On subsequent client-side navigation, the same component fetches fresh data from the API and shows a loading skeleton while it arrives.
The same React components work for both SSG pages and client-side navigation. No special server components. No framework magic. No conditional rendering logic. Just a data layer that's smart about where its initial state comes from.
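A page component doesn't need to know which mode it's in. Here's a sketch of what that looks like in practice; the hook name, query key, types, and the /api/mentors endpoint are illustrative.

```tsx
// Sketch of a dual-mode component. The endpoint and types are assumptions.
import { useQuery } from "@tanstack/react-query";

type Mentor = { id: string; name: string; headline: string };

function useMentors() {
  return useQuery<Mentor[]>({
    queryKey: ["mentors"],
    // First load: the cache was seeded from the embedded page data, so this renders
    // immediately. Client-side navigation: this fetches and the skeleton shows meanwhile.
    queryFn: () => fetch("/api/mentors").then((res) => res.json()),
  });
}

export function MentorList() {
  const { data } = useMentors();
  if (!data) return <ul aria-busy="true"><li>Loading mentors...</li></ul>;
  return (
    <ul>
      {data.map((mentor) => (
        <li key={mentor.id}>
          {mentor.name}: {mentor.headline}
        </li>
      ))}
    </ul>
  );
}
```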
VI – How Bundle Selection Works
One detail ties the SSG to the broader architecture: the HTML template references the correct JavaScript bundle based on the route type.
Public SSG pages reference the public bundle — about 400 kilobytes. This contains React, React Router, React Query, and the public page components. It does not contain authentication logic, dashboard components, session billing, or admin features.
A user browsing the public mentoring site downloads 400 kilobytes of JavaScript. Not the 1.5 megabytes that the full private dashboard requires. The SSG HTML renders instantly, the small bundle loads in parallel, and hydration makes it interactive — typically within 300 milliseconds on a decent connection.
The server makes the bundle decision, not the client. Based on the URL path in the incoming request, the server knows exactly which bundle to reference in the HTML template. Public routes get the public bundle. Auth routes get the auth bundle. Private routes get the private bundle.
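The selection itself is a few lines. This sketch assumes illustrative path prefixes and bundle file names, not our exact route layout.

```ts
// Sketch of route-based bundle selection; prefixes and bundle paths are assumptions.
function bundleForPath(path: string): string {
  // Strip an optional locale prefix like /en or /pt-BR before matching.
  const route = path.replace(/^\/[a-z]{2}(-[A-Z]{2})?(?=\/|$)/, "") || "/";
  if (route.startsWith("/dashboard") || route.startsWith("/admin")) return "/assets/private.js";
  if (route.startsWith("/login") || route.startsWith("/signup")) return "/assets/auth.js";
  return "/assets/public.js";
}
```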
Three bundles. One codebase. Zero framework involvement.
Building something with complex rendering requirements? Want to talk through whether a framework serves you or constrains you? I work through exactly these architectural decisions in mentoring sessions at mentoring.oakoliver.com. Or explore the technical portfolio at oakoliver.com.
VII – Cache Invalidation, the Simple Way
Our SSG pages regenerate on every server restart. When we deploy new code or update content, the server restarts, and all 480 pages rebuild with fresh data. This takes 650 milliseconds — less than a second, and Traefik's health check ensures zero-downtime deploys anyway.
But what about content changes that don't require a code deploy? New mentor signs up. FAQ gets updated.
For this, we have a single admin endpoint. Hit it with the right credentials, and all 480 pages regenerate in 650 milliseconds. The new pages are in the map and already being served before the API response even reaches the caller.
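Sketched as an Elysia route, it might look like this. The path, header name, and token check are illustrative, not our actual auth scheme.

```ts
// Sketch of the regeneration endpoint; auth scheme and paths are assumptions.
import { Elysia } from "elysia";
import { generateAllPages, pageCache } from "./ssg"; // from the startup sketch

export const adminRoutes = new Elysia().post("/admin/regenerate", async ({ headers, set }) => {
  if (headers["x-admin-token"] !== process.env.ADMIN_TOKEN) {
    set.status = 401;
    return { error: "unauthorized" };
  }
  const started = performance.now();
  await generateAllPages(); // overwrites entries in the in-memory map
  return { pages: pageCache.size, ms: Math.round(performance.now() - started) };
});
```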
Compare this to Next.js ISR. Revalidation intervals to configure. Stale-while-revalidate behavior you can only hope works correctly. On-demand revalidation API routes to wire up. Cache purging to debug when pages don't update.
Our approach is brute-force simple. Regenerate everything. Always. When it takes 650 milliseconds, there's no reason to be clever about partial invalidation.
VIII – International SEO at Scale
For a 40-locale site, proper hreflang tags are essential. Each page needs to tell search engines about all 40 of its language variants plus a default.
That's 41 link tags per page. Across 480 pages, that's 19,680 hreflang declarations, all generated in the string operations phase.
With Next.js, we had to use a third-party internationalization plugin and configure hreflang generation separately. With our custom SSG, it's a string concatenation loop that runs during page generation.
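The loop is about as plain as it sounds. A sketch, with the canonical domain hard-coded for clarity and x-default pointing at the unprefixed URL (an illustrative choice):

```ts
// Sketch of hreflang generation: 40 locale variants plus x-default per page.
const ORIGIN = "https://mentoring.oakoliver.com";

function hreflangTags(routePath: string, locales: string[]): string {
  let tags = `<link rel="alternate" hreflang="x-default" href="${ORIGIN}${routePath}" />\n`;
  for (const locale of locales) {
    tags += `<link rel="alternate" hreflang="${locale}" href="${ORIGIN}/${locale}${routePath}" />\n`;
  }
  return tags; // 41 link tags per page
}
```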
Language negotiation is equally straightforward. When a user hits the root URL without a locale prefix, the server parses their Accept-Language header, finds the best match among our 40 supported locales, and redirects with a 302. The redirect target is an SSG page that serves instantly from memory.
Total time from request to rendered HTML: under two milliseconds.
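For reference, the negotiation can be a small pure function plus a redirect in the handler. This sketch assumes "en" as the fallback locale.

```ts
// Sketch of Accept-Language negotiation; the "en" fallback is an assumption.
function negotiateLocale(acceptLanguage: string | null, supported: string[]): string {
  if (!acceptLanguage) return "en";
  // Parse "pt-BR,pt;q=0.9,en;q=0.8" into candidates ordered by quality.
  const candidates = acceptLanguage
    .split(",")
    .map((part) => {
      const [tag, q] = part.trim().split(";q=");
      return { tag: tag.toLowerCase(), q: q ? parseFloat(q) : 1 };
    })
    .sort((a, b) => b.q - a.q);

  for (const { tag } of candidates) {
    const exact = supported.find((l) => l.toLowerCase() === tag);
    if (exact) return exact;
    const base = supported.find((l) => l.toLowerCase().startsWith(tag.split("-")[0]));
    if (base) return base;
  }
  return "en";
}

// In the root handler, the redirect is then a 302 to the negotiated locale's SSG page:
//   set.status = 302;
//   set.headers["Location"] = `/${negotiateLocale(headers["accept-language"], locales)}/`;
```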
IX – What We Lost
Let's be honest about what our custom SSG doesn't have.
No image optimization — Next.js's Image component is genuinely excellent. We use Cloudflare's image resizing instead.
No automatic code splitting per route — we do it manually via three bundles. More work, but more control.
No edge runtime — we don't need it. Single VPS.
No incremental static regeneration — we regenerate everything. At 650 milliseconds for 480 pages, ISR solves a problem we don't have.
No React Server Components — we use client components with SSG hydration. Different trade-off, simpler mental model.
No massive community or ecosystem — Next.js has thousands of plugins, examples, and tutorials. Our custom SSG has one developer who understands it completely.
That last point is actually an advantage. When something breaks, there's no framework to debug. No middleware chain to trace. No cache layer to interrogate. It's our code, doing exactly what we told it to do, and nothing else.
X – When NOT to Do This
I want to be clear. This approach is not universally better than Next.js. It's better for our specific situation.
Single VPS deployment — no edge, no serverless.
Known number of routes — not a CMS with unbounded pages.
Small team — one developer who owns the entire stack.
Performance-critical — competing with platforms that have ten times our budget.
Full control required — custom billing, real-time event streaming, specific bundle splits.
If you're building a marketing site with 20 pages and deploying to Vercel, use Next.js. It's excellent for that.
If you're building a content-heavy site with thousands of pages and a content team, use Next.js with ISR. It handles that beautifully.
But if you're building a full-stack application where the static pages are just one layer of a larger system, and you want to control every byte that reaches the user, and you're willing to own the complexity — 340 lines of TypeScript can replace an entire framework.
The breakpoint isn't about capability. It's about ownership. Frameworks trade your control for their convenience. Sometimes that trade is worth it. Sometimes it isn't.
What's the most complex framework feature you rely on? Could you rebuild it in 340 lines?
I'm genuinely curious where the breakpoint falls for you.
– Antonio