I Run 6 Production Services on a Single €30/Month Server. Here's Why I Don't Need Kubernetes.

My entire infrastructure costs less than a single AWS Lambda function at scale.
I run six production services on one machine. Two SaaS platforms processing real payments. An automation engine. An analytics stack with ClickHouse. An image processing service. A development environment.
Total monthly cost: around thirty dollars.
That's roughly one-sixth of what I'd pay on AWS. One-seventh of Vercel or Railway. And about one-sixtieth of what Kubernetes would cost by the time you factor in the managed control plane, the load balancers, and the three months of your life you'd lose configuring it.
This isn't a side project. This is production infrastructure that books real mentoring sessions and processes real credit purchases.
I – The Six Services
Let me paint the picture. One Hetzner server, four virtual CPU cores, eight gigabytes of RAM, 160 gigabytes of NVMe storage.
Running on it, each in its own Docker container, each with its own subdomain, all routed through a single Traefik reverse proxy managed by Coolify:
Vibe is the micro-SaaS marketplace. It handles credit purchases, AI tool execution, and creator payouts via Stripe Connect. Mentoring is the mentoring platform with per-session payments, scheduling, and real-time messaging. n8n is the workflow automation engine — email sequences, Slack notifications, data sync, scheduled tasks. It's the glue that connects everything.
Rybbit Analytics is a self-hosted analytics stack with a ClickHouse backend. It replaces Google Analytics across all my properties. Privacy-respecting, no cookie banners needed. Imago handles image processing — resizing, compression, format conversion, and caching for both platforms. Dev is a staging environment that mirrors production with different data.
Six services. Six subdomains. One server. One bill. One SSH connection to debug anything.
II – The Math Nobody on Cloud Twitter Wants to See
A Hetzner CPX31 costs about fourteen euros per month. Add a 100-gigabyte storage volume, DNS, and automated backups. You land at around twenty-five dollars a month, all in.
The equivalent on AWS — a t3.large instance, two RDS PostgreSQL micros, an application load balancer, EBS storage, data transfer, S3, and CloudWatch — runs about a hundred and sixty dollars per month. That's six times more expensive.
The equivalent on Vercel, Railway, or Render — six services at twenty dollars each base price, two managed databases, bandwidth overages — runs about a hundred and eighty dollars per month. That's seven times more.
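If you want to check my arithmetic, here it is as a throwaway script, using the rough figures above rather than anyone's actual invoice:

```python
# Back-of-envelope monthly cost comparison, using the approximate figures from this article.
costs_usd = {
    "Hetzner CPX31 + volume + backups": 25,
    "AWS (EC2 + 2x RDS + ALB + extras)": 160,
    "PaaS (Vercel / Railway / Render)": 180,
}

baseline = costs_usd["Hetzner CPX31 + volume + backups"]
for name, monthly in costs_usd.items():
    print(f"{name}: ~${monthly}/mo ({monthly / baseline:.1f}x the single server)")
```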
But the real savings aren't in the server cost. They're in the things you don't need.
No NAT gateways at thirty-two dollars each. No managed load balancer at twenty-two dollars. No container orchestration service. No managed database with its mandatory minimum tier. No surprise bandwidth charges that show up three weeks after the bill closes.
One server. One predictable cost. And a Hetzner CPX plan gives you vCPUs without a burst-credit system, unlike burstable cloud instances that throttle under load and charge you extra when they don't.
III – Coolify: Self-Hosted Heroku That Actually Works
Coolify is what makes this manageable for a single developer.
Without it, I'd be hand-editing docker-compose files, writing Traefik configurations from scratch, setting up Let's Encrypt with certbot cron jobs, building CI/CD pipelines with GitHub Actions, and spending weekends debugging deployment failures instead of building product.
Coolify handles all of it. Push to main, it builds and deploys. SSL certificates for every subdomain, auto-renewed. Routing rules generated from its UI. Container lifecycle management with health checks. Secure environment variable storage. Deployment previews on PR branches. Centralized logging.
The deployment flow is: push code to GitHub, Coolify webhook triggers a build, Docker multi-stage build runs on the VPS, new container starts and passes its health check, Traefik routes traffic to it, old container stops.
Zero-downtime deployments work because Traefik only routes to healthy containers. The new container starts, passes its check, and Traefik adds it to the pool before removing the old one. During the overlap, both handle requests. Users notice nothing.
Installing Coolify is one command. It sets up Docker, Traefik, and its management panel. Point your DNS, connect your repos, and deploy. Total setup time from zero to six running services is about four hours, most of which is writing Dockerfiles and configuring environment variables.
IV – Traefik: The Silent Hero
Traefik is the reverse proxy doing the real work behind the scenes. One instance handles HTTPS termination for all six services plus the Coolify panel, automatic Let's Encrypt certificate management, host-based routing to the right container for each subdomain, and middleware for rate limiting, security headers, and redirects.
The magic is in Docker labels. Coolify sets labels on each container that Traefik reads to configure routing. When a new container starts, Traefik detects the labels and updates its routing table automatically. No configuration files to edit. No Traefik restarts. No downtime.
Each service gets its own subdomain and its own routing rule. The image processing service gets cache headers middleware that sets aggressive browser caching. The dev environment gets basic auth middleware so it's password-protected. Everything else just routes cleanly to the right container on the right port.
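Coolify generates these labels on deploy, so you never write them by hand, but it helps to see what they roughly look like. Here's a hedged sketch using Docker's Python SDK; the router name, domain, image, and network below are hypothetical, and the label keys follow Traefik v2's Docker provider conventions:

```python
# pip install docker
import docker

client = docker.from_env()

# Hypothetical labels for an image service behind Traefik.
# Coolify writes equivalent labels automatically on deploy.
labels = {
    "traefik.enable": "true",
    # Host-based routing: this subdomain goes to this container.
    "traefik.http.routers.imago.rule": "Host(`img.example.com`)",
    "traefik.http.routers.imago.entrypoints": "websecure",
    "traefik.http.routers.imago.tls.certresolver": "letsencrypt",
    # Which container port Traefik should forward to.
    "traefik.http.services.imago.loadbalancer.server.port": "8080",
    # Middleware: aggressive browser caching for processed images.
    "traefik.http.middlewares.imago-cache.headers.customresponseheaders.Cache-Control":
        "public, max-age=31536000, immutable",
    "traefik.http.routers.imago.middlewares": "imago-cache",
}

client.containers.run(
    "ghcr.io/example/imago:latest",  # hypothetical image
    detach=True,
    network="coolify",               # must be a network Traefik watches
    labels=labels,
)
```

When a labeled container starts, Traefik adds the route; when it stops, the route disappears. No config file, no restart.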
This is the infrastructure pattern that cloud providers don't want you to know about. A single Traefik instance on a single server can handle thousands of requests per second across dozens of services. You don't need an AWS Application Load Balancer at twenty-two dollars a month to route HTTP traffic. You need Traefik, which is free and more than fast enough.
V – Fitting Six Services in Eight Gigabytes
Memory is the constraint. Not CPU.
Average utilization across all services, including both PostgreSQL instances, ClickHouse, and Traefik itself, sits around 28% of RAM and 31% of CPU. Even at peak, when all services spike simultaneously, total usage hits about half the available resources.
That headroom is deliberate. It handles traffic spikes without degradation. It gives room for Docker builds, which temporarily spike CPU. And it means I'm nowhere near the point where I'd need to upgrade.
ClickHouse is the memory hog. It keeps active datasets in memory for fast analytical queries. I've configured it with hard memory limits — about 400 megabytes per query and a 15% cap on total server memory usage.
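Both knobs are standard ClickHouse settings: max_memory_usage caps a single query, and max_server_memory_usage_to_ram_ratio caps the server as a whole. A minimal sketch with the clickhouse-connect client, assuming a local instance and the limits mentioned above:

```python
# pip install clickhouse-connect
import clickhouse_connect

client = clickhouse_connect.get_client(host="localhost")  # adjust host and auth for your setup

# Per-query cap: refuse to let one analytical query eat more than ~400 MB.
result = client.query(
    "SELECT count() FROM system.tables",
    settings={"max_memory_usage": 400 * 1024 * 1024},
)
print(result.result_rows)

# The server-wide cap (the 15% figure) is a server setting, not a query setting:
#   <max_server_memory_usage_to_ram_ratio>0.15</max_server_memory_usage_to_ram_ratio>
```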
Imago has the most variable CPU. Image processing is intensive but bursty. A batch of resizing operations might peg a core for ten seconds, then idle for minutes. Four CPU cores absorb these bursts without other services noticing.
The two PostgreSQL instances use Alpine images and consume about fifty megabytes each at idle. Separate instances — not a shared database — because I want complete isolation. A runaway query in Vibe can't starve Mentoring's connections. Independent backups. Independent upgrade schedules. The overhead of running two instances instead of one is about forty megabytes. A rounding error on eight gigs.
VI – The Backup Strategy That Lets Me Sleep
Running everything on one server means a server failure is a total failure. This is the objection everyone raises. And they're right to raise it.
The answer is defense in depth.
Layer one is Hetzner weekly snapshots. Full VPS image backup. Restore the entire server in under five minutes. Costs about three euros a month.
Layer two is daily database dumps. Each PostgreSQL instance gets dumped, compressed, and uploaded to Hetzner Object Storage with thirty-day retention. Automated by an n8n workflow that runs at 3 AM; a sketch of an equivalent job follows below.
Layer three is application data exports. n8n workflows exported as JSON. ClickHouse data exported. Imago's cache is ephemeral and intentionally not backed up.
Layer four is Git. All application code and all Dockerfiles live in GitHub. Infrastructure can be rebuilt from scratch.
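Layer two is the one worth spelling out. Here's a hedged sketch of the dump-and-upload job, assuming pg_dump is on the path, connection details live in the usual PG* environment variables, and Hetzner Object Storage is addressed through its S3-compatible API (the endpoint and bucket below are placeholders):

```python
# pip install boto3
import gzip
import subprocess
from datetime import datetime, timezone

import boto3

DATABASES = ["vibe", "mentoring"]                 # one dump per PostgreSQL instance
ENDPOINT = "https://fsn1.your-objectstorage.com"  # placeholder S3-compatible endpoint
BUCKET = "db-backups"                             # placeholder bucket name

s3 = boto3.client("s3", endpoint_url=ENDPOINT)    # access keys come from the environment

stamp = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H%M")

for db in DATABASES:
    # Plain-text dump of one database.
    dump = subprocess.run(["pg_dump", db], check=True, capture_output=True).stdout

    key = f"{db}/{stamp}.sql.gz"
    s3.put_object(Bucket=BUCKET, Key=key, Body=gzip.compress(dump))
    print(f"uploaded {key} ({len(dump)} bytes before compression)")
```

Thirty-day retention is easiest as a lifecycle rule on the bucket if your storage supports it; otherwise the same job can delete objects older than thirty days.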
Disaster recovery time from bare metal: under forty minutes. Provision a new server, restore from snapshot, verify services. Or rebuild from scratch: install Coolify, configure six services, restore databases. Either path is faster than most teams can recover from an AWS region outage.
VII – Monitoring Without the Observability Tax
Running on one server simplifies monitoring dramatically. No distributed traces to correlate. No cross-region latency to track. Just one machine's vital signs.
Every service exposes a health endpoint that checks database connectivity and reports memory usage and uptime. Coolify's built-in dashboard shows CPU, RAM, disk, and network per container. Rybbit tracks user-facing metrics — page loads, API response times, errors.
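What that endpoint looks like depends on each service's stack, but the shape is always the same. A minimal sketch in Python with FastAPI, assuming asyncpg for the database check and psutil for memory stats; any framework works just as well:

```python
# pip install fastapi uvicorn asyncpg psutil
import os
import time

import asyncpg
import psutil
from fastapi import FastAPI, Response

app = FastAPI()
STARTED = time.monotonic()

@app.get("/health")
async def health(response: Response):
    status = {
        "uptime_s": int(time.monotonic() - STARTED),
        "memory_mb": psutil.Process().memory_info().rss // (1024 * 1024),
    }
    try:
        # Cheap connectivity check against the service's own database.
        conn = await asyncpg.connect(os.environ["DATABASE_URL"], timeout=2)
        await conn.execute("SELECT 1")
        await conn.close()
        status["database"] = "ok"
    except Exception as exc:
        status["database"] = f"error: {exc}"
        response.status_code = 503  # lets Traefik and the monitor treat it as unhealthy
    return status
```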
The critical layer is an n8n workflow that pings every service every five minutes. If any health check fails, it sends a Slack alert. If the service is still down two minutes later, it escalates. Simple, reliable, and costs nothing beyond the n8n instance I'm already running.
I don't need Datadog. I don't need New Relic. I don't need a fifty-dollar-per-month observability platform to tell me that one of my six containers is unhealthy. I need a cron job and a Slack webhook.
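If you'd rather not keep the monitor inside n8n, the same logic fits in a cron-friendly script. A hedged sketch, with hypothetical health URLs and a Slack incoming-webhook URL read from the environment:

```python
# pip install requests
import os

import requests

# Hypothetical health endpoints; in reality, one per subdomain.
SERVICES = {
    "vibe": "https://vibe.example.com/health",
    "mentoring": "https://mentoring.example.com/health",
    "n8n": "https://n8n.example.com/healthz",
}
SLACK_WEBHOOK = os.environ["SLACK_WEBHOOK_URL"]

def alert(text: str) -> None:
    # Slack incoming webhooks accept a simple {"text": ...} payload.
    requests.post(SLACK_WEBHOOK, json={"text": text}, timeout=5)

for name, url in SERVICES.items():
    try:
        resp = requests.get(url, timeout=5)
        if resp.status_code != 200:
            alert(f"warning: {name} health check returned {resp.status_code}")
    except requests.RequestException as exc:
        alert(f"ALERT: {name} is unreachable: {exc}")
```

Run it from cron every five minutes. The escalation rule (still down two minutes later) is easier to keep in n8n, where state between runs comes for free.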
VIII – The Limits (Honest Assessment)
I'm not going to pretend this scales infinitely. Here's where the walls are.
Memory ceiling hits when all services peak simultaneously and total usage exceeds about 85% — roughly seven gigabytes. The fix is upgrading to the next Hetzner tier, which doubles RAM for about ten euros more per month. That buys another year of growth.
CPU ceiling hits during sustained heavy image processing or AI inference. The fix is offloading compute-heavy work to a temporary worker VPS that spins up on demand; there's a sketch of that at the end of this section.
Database size becomes a concern past fifty gigabytes per instance. Backups get slow, dumps get large. The fix is archiving old data or splitting to a dedicated database server.
The honest answer: this setup handles ten thousand daily active users across all services comfortably. If any single service needs to handle fifty thousand or more, it's time to split that service to its own server. But that's a great problem to have — and the migration is straightforward since each service is already an independent Docker container.
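That on-demand worker needs no orchestration either. Hetzner's API makes it a few lines; here's a hedged sketch with the official hcloud Python client, where the server type, image, and name are my assumptions and the API token comes from the environment:

```python
# pip install hcloud
import os

from hcloud import Client
from hcloud.images import Image
from hcloud.server_types import ServerType

client = Client(token=os.environ["HCLOUD_TOKEN"])

# Spin up a short-lived worker for a burst of heavy processing.
response = client.servers.create(
    name="burst-worker-1",
    server_type=ServerType(name="cpx31"),  # assumption: same tier as the main box
    image=Image(name="docker-ce"),         # assumption: Hetzner's Docker-ready app image
)
worker = response.server
print(f"worker up at {worker.public_net.ipv4.ip}")

# ... point the heavy jobs at the worker, then tear it down when the batch is done:
worker.delete()
```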
IX – The Anti-Kubernetes Argument
"But what about high availability?"
Even if a deployment did cost thirty seconds of downtime, my users wouldn't notice, and the rolling container swaps described earlier don't produce even that. They don't need five nines. They need the service to work when they use it, which is 99.9% of the time.
"But what about auto-scaling?"
My traffic is predictable. There's no Black Friday spike. There's no viral TikTok moment. If there were, I'd upgrade the server in five minutes or add a second one behind a load balancer.
"But what about service mesh?"
My services communicate via HTTP on a Docker bridge network. Latency is microseconds. There's no need for Istio. There's no need for circuit breakers between containers on the same machine.
Kubernetes is a solution for problems I don't have. The operational overhead of running k8s — or paying seventy-two dollars a month for managed EKS — far exceeds the benefit when you have six services doing under a thousand requests per second combined.
The right time to migrate to Kubernetes: when your team grows past five to ten engineers who need independent deployment pipelines, or when you need to scale individual services across multiple machines. Not when you read a blog post about it.
Running Your Own Infrastructure? I Can Help You Architect It.
I've helped developers consolidate from three-hundred-dollar-per-month cloud bills to thirty-dollar single-server setups — without losing reliability.
Whether you're launching your first SaaS or migrating off an expensive PaaS, the architecture decisions you make in week one compound for years.
Book a session at mentoring.oakoliver.com and let's design your infrastructure together. Or see this exact stack in production at oakoliver.com.
X – The Philosophy
You don't need a cloud provider to run a cloud business.
You need a server. A reverse proxy. The discipline to automate your deployments and back up your data.
Everything else is someone selling you complexity you don't need yet.
I run a mentoring platform, a micro-SaaS marketplace, an automation engine, an analytics stack, an image service, and a dev environment. All on one machine. All for the price of two fancy coffees per week.
The server has been running for months. Uptime is north of 99.9%. I deploy multiple times a day without thinking about it. And my infrastructure bill is a line item I genuinely forget about because it's smaller than my Spotify subscription.
What would you build if infrastructure costs were no longer a constraint?
– Antonio