We Use Subdomains as Security Boundaries. No Kubernetes. No Containers. Just Directories.

Every SaaS company eventually faces the isolation question.
How do you give each customer their own space without giving each customer their own server?
Kubernetes solves this with namespaces and pods. AWS Lambda solves it with per-invocation isolation. Cloudflare Workers solve it with V8 isolates. All of these cost money we don't have and add complexity we don't want.
We solved it with subdomains.
Not as routing. As identity.
I – The Subdomain Is the Principal
vibe.oakoliver.com is a micro-SaaS platform where anyone can create AI-powered micro-apps in 30 seconds. You describe what you want, an LLM generates the code, and your app goes live at your-app-name.vibe.oakoliver.com.
We host 99 of these micro-apps on a single Elysia.js process. On a single server. And they're isolated from each other as thoroughly as if they were on separate machines.
The core concept is deceptively simple. When a request arrives, the very first thing the server does is extract the subdomain from the hostname. That subdomain becomes the principal — the identity that determines what data, files, secrets, and resources the request can access.
Everything downstream is scoped to this principal.
The subdomain maps to a filesystem path. That path contains the app's database, its generated code, its uploads, its billing ledger. If the path doesn't exist, the app doesn't exist.
No database query determines what data to show. No middleware checks a tenant ID column. The subdomain maps to a directory, and the directory contains everything.
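The resolution step can be sketched in a few lines. Everything here is illustrative: `BASE_DOMAIN`, `APPS_ROOT`, and the function names are assumptions standing in for the real configuration, not our actual code.

```typescript
import { existsSync } from "node:fs";
import { join } from "node:path";

const BASE_DOMAIN = "vibe.oakoliver.com"; // assumed base domain
const APPS_ROOT = "/srv/apps";            // hypothetical data root

// Extract the tenant slug from the Host header, or null if none.
export function resolveSlug(hostname: string): string | null {
  if (!hostname.endsWith("." + BASE_DOMAIN)) return null;
  const slug = hostname.slice(0, -(BASE_DOMAIN.length + 1));
  return slug.includes(".") ? null : slug; // reject nested subdomains
}

// The principal is valid only if its directory exists on disk.
export function principalDir(slug: string): string | null {
  const dir = join(APPS_ROOT, slug);
  return existsSync(dir) ? dir : null;
}
```

Note the second check: a well-formed slug whose directory is missing is treated exactly like a nonexistent app.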
II – The Threat Model
When user A creates a micro-app, we need to guarantee five things.
Data isolation. User A cannot read user B's data, API keys, or generated code.
Resource isolation. User A's app cannot consume resources that starve user B.
Filesystem isolation. User A's uploaded assets, generated code, and databases are physically separated from everyone else's.
Secret isolation. API keys, encryption keys, and session tokens are scoped exclusively to user A.
Billing isolation. User A's credit balance and transaction history are completely independent.
The traditional approaches each involve trade-offs. Separate VPS per user gives you perfect isolation but costs five to twenty dollars per user per month — unsustainable for a micro-SaaS. Kubernetes pods give you strong isolation but require cluster management overhead. Docker containers are lighter but still need orchestration. Database row-level security is the cheapest but weakest — a single code bug can expose data across tenants.
The subdomain-as-principal pattern gives us strong isolation with minimal per-user cost and no infrastructure dependencies beyond a single process and a filesystem.
III – From Hostname to Identity
The resolution layer is the foundation of everything. On every incoming request, the server extracts the subdomain, validates its format, checks whether the corresponding directory exists, and derives a unique secret key.
This takes about 0.02 milliseconds. Essentially free.
The validation is strict. Slugs must be alphanumeric with hyphens, between three and sixty-three characters. Reserved words like "www," "api," "admin," and "dashboard" are blocked. If the format is invalid or the subdomain is reserved, the request is rejected before anything else happens.
But the critical security property is in the secret derivation.
We don't store secrets in a database. We derive them. Each app's secret key is computed from a master key combined with the app's slug, using cryptographic hashing. This means every app has a unique, deterministic secret. The secret can't be guessed without the master key. We never need to look up secrets — just compute them. And rotating the master key rotates all app secrets simultaneously.
There's no secrets table to exfiltrate. There's no single point of catastrophic failure. If someone compromises one app's derived key, they learn nothing about any other app's key.
This is essentially a simplified version of HKDF (HMAC-based key derivation), the standard approach for deriving multiple keys from a single master secret. We extend it further by deriving purpose-specific keys — one for encryption, one for sessions, one for webhooks, one for the app's API access. The encryption key can't be used as a session key because the two are derived with different purpose strings.
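A minimal sketch of purpose-scoped derivation, assuming a `MASTER_KEY` environment variable. The purpose strings and the `slug:purpose` input format are illustrative; production code would likely use a full HKDF construction (Node exposes one as `crypto.hkdfSync`).

```typescript
import { createHmac } from "node:crypto";

// The master key would come from secure configuration; this fallback
// is for illustration only.
const MASTER_KEY = process.env.MASTER_KEY ?? "dev-only-master-key";

type Purpose = "encryption" | "session" | "webhook" | "api";

// HMAC(master, slug + purpose): deterministic and unique per app and
// per purpose. Rotating MASTER_KEY rotates every derived key at once.
export function deriveKey(slug: string, purpose: Purpose): string {
  return createHmac("sha256", MASTER_KEY)
    .update(`${slug}:${purpose}`)
    .digest("hex");
}
```

Because derivation is one-way, learning one app's session key tells an attacker nothing about its encryption key or about any other app's keys.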
IV – Each App Gets Its Own World
When a user creates a new micro-app, the server provisions a directory structure. An app database for key-value storage and metadata. A billing database for credits, transactions, and holds. Directories for generated code, uploads, and logs. A manifest file recording the owner and creation timestamp.
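Provisioning might look roughly like this. The filenames and layout (`manifest.json`, the `code`/`uploads`/`logs` subdirectories) are assumptions based on the description above, not the literal production structure.

```typescript
import { mkdirSync, writeFileSync } from "node:fs";
import { join } from "node:path";

// Create a self-contained world for a new app and return its path.
// The SQLite files (app and billing databases) are created lazily on
// first connection, so only the directories and manifest go here.
export function provisionApp(root: string, slug: string, owner: string): string {
  const dir = join(root, slug);
  for (const sub of ["code", "uploads", "logs"]) {
    mkdirSync(join(dir, sub), { recursive: true });
  }
  writeFileSync(
    join(dir, "manifest.json"),
    JSON.stringify({ owner, createdAt: new Date().toISOString() }, null, 2),
  );
  return dir;
}
```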
The result on disk is beautifully simple. Ninety-nine directories, each self-contained. Each one a complete, independent application.
To back up an app, archive the directory. To delete an app, remove the directory. To migrate an app to another server, copy the directory. There's no shared database to query, no foreign keys to manage, no cascade deletes to worry about.
This is the operational beauty of the pattern. The unit of isolation is the unit of operations. Whatever you want to do to a single tenant — backup, restore, debug, delete, migrate — is a filesystem operation on a single directory.
V – Per-App Databases, Not Shared Tables
Each app gets two SQLite databases. One for application data — key-value storage, metadata, configuration. Another for billing — credit balance, transactions, holds.
Why two? Different access patterns and different durability requirements. The billing database runs in strict synchronous mode because losing financial transactions is unacceptable. The app database runs in normal synchronous mode for better performance, since key-value data can be regenerated.
Connection management matters with 99 potential apps. We use lazy initialization — connections only open when an app actually receives a request — combined with an eviction policy that closes connections idle for more than ten minutes.
At any given time, we typically have 15 to 25 active connections. The 99 apps don't all receive traffic simultaneously.
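The lazy-open, idle-evict policy can be sketched as a small cache. The `Conn` interface, the injected clock, and the pragma comment are illustrative assumptions; the real server would hand out SQLite handles here.

```typescript
interface Conn {
  close(): void;
}

const IDLE_MS = 10 * 60 * 1000; // evict connections idle > 10 minutes

export class ConnCache {
  private cache = new Map<string, { conn: Conn; lastUsed: number }>();

  // `open` would create the real SQLite connection for a slug,
  // setting PRAGMA synchronous appropriately per database.
  // `now` is injectable so eviction is testable with a fake clock.
  constructor(
    private open: (slug: string) => Conn,
    private now: () => number = () => Date.now(),
  ) {}

  get(slug: string): Conn {
    let entry = this.cache.get(slug);
    if (!entry) {
      entry = { conn: this.open(slug), lastUsed: 0 }; // lazy init
      this.cache.set(slug, entry);
    }
    entry.lastUsed = this.now();
    return entry.conn;
  }

  // Run periodically (e.g. on a timer) to close idle connections.
  evictIdle(): number {
    const cutoff = this.now() - IDLE_MS;
    let evicted = 0;
    for (const [slug, entry] of this.cache) {
      if (entry.lastUsed < cutoff) {
        entry.conn.close();
        this.cache.delete(slug);
        evicted++;
      }
    }
    return evicted;
  }

  get size(): number {
    return this.cache.size;
  }
}
```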
But here's the point that matters most: there is zero contention between users. User A's database operations never block user B's. A bug that corrupts one app's database file doesn't touch anyone else. A query on one user's data scans only that user's rows, not a shared table growing with every tenant.
VI – The Request Lifecycle
Let me trace a complete request through the system so you can see how everything connects.
A user's browser sends a request to their app's subdomain. Traefik receives it on port 443, terminates TLS using a wildcard certificate, and forwards it to the Elysia.js server.
The principal resolution middleware fires first. It extracts the subdomain from the hostname, validates the format, confirms the directory exists, and derives the app's secret key. All of this attaches to the request context.
Authentication middleware runs next, verifying the request's credentials against the principal's derived secret.
The route handler executes the actual business logic — a key-value read, an AI generation request, whatever. It gets its database connection through the principal's slug, queries only that app's data.
The billing middleware fires after the handler, debiting the appropriate number of credits from that app's billing database.
The entire request, from Traefik to response, takes two to four milliseconds for a typical key-value read. The principal resolution adds 0.02 milliseconds. The database query adds about 0.5 milliseconds. The rest is serialization and network overhead.
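The ordering above can be sketched as a generic stage pipeline. This is not Elysia's actual API; every name here is illustrative, and each stage either enriches the context or short-circuits with a status.

```typescript
type Ctx = {
  hostname: string;
  slug?: string;
  authed?: boolean;
  body?: string;
  status?: number;
};
type Stage = (ctx: Ctx) => Ctx;

// 1. Resolve the principal from the hostname (simplified).
const resolvePrincipal: Stage = (ctx) => {
  const slug = ctx.hostname.split(".")[0];
  return slug ? { ...ctx, slug } : { ...ctx, status: 404 };
};

// 2. Verify credentials against the principal's derived secret (stubbed).
const authenticate: Stage = (ctx) =>
  ctx.status ? ctx : { ...ctx, authed: true };

// 3. Run business logic scoped to the principal's slug.
const handle: Stage = (ctx) =>
  ctx.status ? ctx : { ...ctx, status: 200, body: `hello from ${ctx.slug}` };

// 4. Debit credits after the handler (no-op in this sketch).
const bill: Stage = (ctx) => ctx;

export const pipeline = (ctx: Ctx): Ctx =>
  [resolvePrincipal, authenticate, handle, bill].reduce((c, s) => s(c), ctx);
```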
Building a multi-tenant platform? Wrestling with isolation trade-offs? I help engineers work through exactly these architectural decisions. Book a session at mentoring.oakoliver.com, or explore the Vibe platform itself at vibe.oakoliver.com.
VII – TLS: Solved Once, Forever
The subdomain pattern has a hidden advantage that nobody talks about: TLS is a solved problem for unlimited tenants.
Traefik manages a single wildcard certificate covering all 99 subdomains. One certificate issued by Let's Encrypt via DNS challenge through Cloudflare's API. Automatic renewal. No per-app certificate management. No certificate rotation per tenant.
Add a new app? It's instantly covered by the wildcard. Delete an app? The certificate doesn't care. Scale to 500 apps? Same single certificate.
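A Traefik setup along these lines would do the job. Treat this as a hedged sketch: the resolver name, router rule, email, and ports are placeholders, and the exact matcher syntax varies between Traefik versions.

```yaml
# Static configuration: a DNS-01 ACME resolver through Cloudflare.
certificatesResolvers:
  letsencrypt:
    acme:
      email: admin@example.com          # placeholder contact address
      storage: /letsencrypt/acme.json
      dnsChallenge:
        provider: cloudflare            # reads Cloudflare API token from env

# Dynamic configuration: one router, one wildcard certificate.
http:
  routers:
    vibe:
      rule: "HostRegexp(`^[a-z0-9-]+\\.vibe\\.oakoliver\\.com$`)"
      service: elysia
      tls:
        certResolver: letsencrypt
        domains:
          - main: "vibe.oakoliver.com"
            sans: ["*.vibe.oakoliver.com"]
  services:
    elysia:
      loadBalancer:
        servers:
          - url: "http://127.0.0.1:3000"  # the single Elysia.js process
```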
This eliminates an entire category of infrastructure complexity that other multi-tenancy approaches have to solve per-tenant.
VIII – Resource Limits Without Containers
Without containers or virtual machines, resource isolation is the hardest part. We can't use kernel-level controls. Instead, we implement application-level throttling.
Each app gets a request rate limit — a thousand requests per minute. AI call limits — fifty per minute. Storage limits — 100 megabytes. Key-value entry limits — ten thousand entries. Execution timeout — five seconds per request.
For AI-generated apps that execute server-side logic, we use abort controllers with hard timeouts. If an app's handler takes more than five seconds, it's terminated with a 504 response.
Is this as strong as container isolation? No. A malicious app could theoretically consume excessive CPU before the timeout fires, or exploit a runtime vulnerability to escape the process.
But our threat model isn't adversarial. These are micro-apps created by paying users who authenticated with email. We're protecting against bugs and mistakes, not nation-state attackers. The application-level limits handle that threat model well.
IX – When to Graduate
Let's be honest about the scaling limits.
At around 500 apps, we'd hit the first real wall. Too many potential SQLite connections, even with lazy loading. Enough directories that filesystem enumeration starts to slow down.
At around 5,000 total requests per second, the single Elysia.js process would saturate one CPU core.
When those limits arrive, the graduation path is clear.
Horizontal split by subdomain range — apps A through M on server one, N through Z on server two. DNS-based routing. Zero code changes.
Multiple processes behind a load balancer — with apps pinned to processes by consistent hashing. The directory structure makes this seamless.
Individual high-traffic apps moved to their own containers. The self-contained directory makes this trivial — move the directory and update DNS.
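Pinning apps to processes can be as simple as hashing the slug. This sketch uses plain hash-mod rather than a true consistent-hashing ring (which would minimize remapping when the shard count changes); the shard count and hash choice are assumptions.

```typescript
import { createHash } from "node:crypto";

// Map a slug deterministically to one of `shards` processes/servers.
export function shardFor(slug: string, shards: number): number {
  const digest = createHash("sha256").update(slug).digest();
  // Use the first 4 bytes as an unsigned int, then mod the shard count.
  return digest.readUInt32BE(0) % shards;
}
```

Because the mapping depends only on the slug, every process agrees on where an app lives without any coordination.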
But for 99 apps on a single server? We're running at about 20 percent of theoretical capacity. We could grow five-fold before scaling even enters the conversation.
X – The Pattern, Generalized
The subdomain-as-principal pattern isn't specific to our stack. Here's the distilled version.
Use the subdomain as the identity anchor. Every downstream system derives its scope from this single identifier.
Map identity to filesystem. Each tenant gets a directory. The directory contains everything the tenant owns.
Derive secrets, don't store them. Use cryptographic key derivation from a master secret plus the tenant identifier. No secrets table. No single point of compromise.
Use per-tenant databases, not shared tables. SQLite makes this practical. Each tenant's data is physically separated.
Let DNS and TLS infrastructure handle the networking. Wildcard certificates plus reverse proxy equals unlimited subdomains with zero per-tenant networking configuration.
Implement resource limits at the application layer. Rate limiting, storage quotas, execution timeouts.
This pattern works for any multi-tenant system where tenants number in the hundreds, you want strong isolation without container overhead, you value operational simplicity over theoretical scalability, and each tenant's data is relatively small.
The directory is the app. The app is the directory. Sometimes the simplest abstraction is the best one.
If you're building multi-tenant software and debating between database-level isolation and infrastructure-level isolation, consider that there's a third path: filesystem-level isolation with derived identity.
What would your architecture look like if your tenant boundary was a directory?
– Antonio