HMAC-Signed Vibe-to-Vibe Calls: How We Stopped Micro-Apps from Impersonating Each Other

Here's the nightmare scenario.

You build a platform where users create micro-apps. Each app gets its own subdomain. Each app can call your platform API. Everything works.

Then one day, a clever user figures out how to make their app send requests that look like they're coming from another app. They steal data. They spend someone else's credits. They impersonate the platform itself.

And you didn't even know it was possible until it happened.

This is the foundational security problem of any platform that hosts user-generated applications. When apps can talk to each other — or to the platform — you need a way to prove that every request is authentic, untampered, and authorized.

We solved this with HMAC-signed requests. And the story of how we got there is more instructive than the solution itself.


I – The Architecture That Created the Problem

Let me set the stage.

vibe.oakoliver.com is a micro-SaaS platform. Users describe an AI-powered app they want, and the platform generates it. Each app gets its own subdomain — like recipe-helper.vibe.oakoliver.com or budget-tracker.vibe.oakoliver.com.

Each app runs in its own Cloudflare Worker sandbox. It has its own per-user SQLite database, its own filesystem scope, and its own encryption keys derived from the app's slug.

But apps aren't islands.

An app might need to look up the current user's credit balance from the platform. An app might want to trigger a payment flow managed by another app. An app might need to verify that a request it received actually came from a legitimate source.

These inter-app and app-to-platform communications happen over HTTP. And HTTP requests can be forged by anyone who knows the URL.


II – The Threat Model: What We're Actually Defending Against

Before you design a security system, you need to define what you're protecting against. Not vague "hackers" — specific attack vectors.

Attack 1: App impersonation. App A sends a request to the platform claiming to be App B. If the platform believes it, App A can read App B's data, spend App B's credits, or modify App B's configuration.

Attack 2: Platform impersonation. A malicious external service sends a request to App A claiming to be the Vibe platform. If App A believes it, the attacker can inject fake data, trigger unauthorized actions, or extract sensitive information.

Attack 3: Request tampering. A legitimate request from App A to the platform is intercepted in transit. The attacker modifies the request body — changing the amount of a credit transaction, for instance — and forwards it. The platform processes the tampered request as if it were legitimate.

Attack 4: Replay attacks. An attacker captures a legitimate signed request and replays it later. If the platform accepts it, the attacker can repeat actions — like credit transfers — indefinitely.

Every one of these attacks is trivial to execute if your inter-app communication is just regular HTTP with API keys. API keys prove identity, but they don't prove integrity. And they're completely vulnerable to replay.

HMAC signing solves all four attack vectors. Here's how.


III – HMAC in 60 Seconds

If you already know HMAC, skip to the next section. If not, here's the essence.

HMAC stands for Hash-based Message Authentication Code. It takes two inputs — a secret key and a message — and produces a fixed-length signature.

The signature proves two things simultaneously:

First, authenticity. Only someone who possesses the secret key could have produced this signature. If you receive a message with a valid signature, you know it came from someone who has the key.

Second, integrity. The signature is computed over the entire message. If anyone changes even a single byte of the message after signing, the signature won't match. You know the message hasn't been tampered with.

The critical property of HMAC is that it's a keyed hash. A regular hash (like SHA-256) can be computed by anyone — you just hash the message. An HMAC requires the secret key. Without the key, you cannot produce a valid signature, and you cannot verify one.

This is why HMAC is perfect for inter-app communication. The platform and the app share a secret key. Every request is signed with that key. If the signature is valid, the request is authentic and untampered. If not, it's rejected.
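
To make that concrete, here's a minimal sketch using the Web Crypto API, which exists in both Cloudflare Workers and modern Node. The function name, key, and message are purely illustrative.

```typescript
// Minimal HMAC-SHA256 helper using the Web Crypto API.
const encoder = new TextEncoder();

async function hmacHex(secretKey: string, message: string): Promise<string> {
  const key = await crypto.subtle.importKey(
    "raw",
    encoder.encode(secretKey),
    { name: "HMAC", hash: "SHA-256" },
    false,
    ["sign"],
  );
  const mac = await crypto.subtle.sign("HMAC", key, encoder.encode(message));
  // 32 bytes of MAC, hex-encoded into a 64-character string.
  return [...new Uint8Array(mac)]
    .map((b) => b.toString(16).padStart(2, "0"))
    .join("");
}

// Anyone can SHA-256 a message; only a holder of the key can produce this:
// const signature = await hmacHex("shared-secret", "credits:hold:42");
```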


IV – The Key Derivation Strategy

The first question is: where do the signing keys come from?

The naive approach is generating a random key for each app and storing it in a database. This works but introduces a dependency — every signature verification requires a database lookup. In a system processing thousands of inter-app requests per minute, that's a lot of lookups.

We took a different approach: deterministic key derivation.

There's a single master secret stored as an environment variable on the platform. When an app is created, its signing key is derived from the master secret and the app's slug using HMAC itself.

The derived key for "recipe-helper" is HMAC(master_secret, "recipe-helper"). The derived key for "budget-tracker" is HMAC(master_secret, "budget-tracker"). Each app gets a unique key, but none of them are stored anywhere.
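
In code, the derivation is a one-liner on top of the hmacHex helper sketched above; the names are illustrative.

```typescript
// Deterministic per-app key derivation: nothing to store, nothing to look up.
// The master secret comes from the platform's environment.
async function deriveAppKey(masterSecret: string, appSlug: string): Promise<string> {
  // HMAC(master_secret, slug): a unique, recomputable key per app.
  return hmacHex(masterSecret, appSlug);
}

// deriveAppKey(secret, "recipe-helper") and deriveAppKey(secret, "budget-tracker")
// yield unrelated keys; neither reveals anything about the other, or about the secret.
```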

Why this matters:

No database of keys to protect. No key rotation ceremony that requires updating every app's stored key. No key distribution problem — the platform can compute any app's key on the fly, because it has the master secret and knows the app's slug.

And if the master secret is compromised? You rotate it once, and every derived key changes. No app-by-app rotation.

The trade-off: every app's security depends on the master secret. But that was already true — the master secret protects everything else too. This approach doesn't increase the blast radius of a master secret compromise. It just makes key management dramatically simpler.


V – Signing a Request: The Protocol

Here's the signing protocol, step by step.

Step 1: Construct the signing payload. This isn't the raw request body. It's a canonical string that includes everything that matters for security:

  • The HTTP method (GET, POST, etc.)
  • The target URL path
  • The request body (if any), serialized deterministically
  • A timestamp (Unix epoch in seconds)
  • A nonce (random string, unique per request)

These elements are concatenated in a specific order with a delimiter. The order matters — it must be identical on both the signing and verification sides.

Step 2: Compute the HMAC. Using the derived key for the sending app, compute HMAC-SHA256 over the signing payload. This produces a 64-character hex string.

Step 3: Attach the signature to the request. Three headers are added:

  • A signature header containing the hex-encoded HMAC
  • A timestamp header containing the Unix timestamp used in signing
  • A nonce header containing the random string

The receiving side uses these three headers plus the request itself to reconstruct the signing payload and verify the signature.

Why include the timestamp and nonce in both the payload and the headers? Because the verifier needs to reconstruct the exact same signing payload. If the timestamp or nonce were only in headers (not in the signed payload), an attacker could modify them without invalidating the signature.

By including them in both places, any modification to the headers changes the payload, which changes the signature, which fails verification.
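
Here's a sketch of the whole signing side. The header names and the newline delimiter are illustrative choices rather than the platform's actual ones; canonicalSerialize is the canonical serializer covered two sections below, and hmacHex is the helper sketched earlier.

```typescript
// Sketch of the request-signing protocol (sender side).
interface SignedHeaders {
  "x-sig": string;        // hex-encoded HMAC-SHA256
  "x-sig-ts": string;     // Unix timestamp (seconds) used in the payload
  "x-sig-nonce": string;  // random, unique per request
}

async function signRequest(
  appKey: string,   // the sending app's derived key
  method: string,   // "GET", "POST", ...
  path: string,     // target URL path, e.g. "/api/credits/hold"
  body?: unknown,   // omitted for bodyless requests
): Promise<SignedHeaders> {
  const timestamp = Math.floor(Date.now() / 1000).toString();
  const nonce = crypto.randomUUID();

  // Step 1: canonical signing payload, identical on both sides.
  const payload = [
    method.toUpperCase(),
    path,
    body === undefined ? "" : canonicalSerialize(body),
    timestamp,
    nonce,
  ].join("\n");

  // Step 2: HMAC-SHA256 over the payload. Step 3: attach the three headers.
  return {
    "x-sig": await hmacHex(appKey, payload),
    "x-sig-ts": timestamp,
    "x-sig-nonce": nonce,
  };
}
```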


VI – Verification: The Other Side

When a request arrives, the receiving side performs verification in this order:

Check 1: Timestamp freshness. The timestamp in the header is compared against the current time. If the difference exceeds a threshold — we use 300 seconds (5 minutes) — the request is rejected.

This is the replay protection. Even if an attacker captures a valid signed request, they can only replay it within a 5-minute window. After that, the timestamp check kills it.

Why 5 minutes? It's a balance between clock skew tolerance and security. Cloudflare Workers and our Hetzner VPS both sync with NTP servers, so clock drift is minimal. But network latency can add seconds, and we'd rather have a slightly loose window than reject legitimate requests due to minor clock differences.

Check 2: Nonce uniqueness. The nonce is checked against a short-lived cache of recently seen nonces. If the nonce has been seen before within the timestamp window, the request is rejected.

This handles the edge case that timestamp freshness doesn't cover — replaying a request within the 5-minute window. The nonce ensures each request is unique even within that window.

The nonce cache auto-evicts entries older than the timestamp window. This keeps memory bounded. We don't need to remember nonces forever — just for the duration of the freshness window.

Check 3: Signature verification. The verifier reconstructs the signing payload from the request method, URL path, body, timestamp header, and nonce header. It computes HMAC-SHA256 using its copy of the signing key. If the computed signature matches the one in the header, the request is authentic and untampered.

All three checks must pass. Any single failure results in a 401 response with no additional information (to avoid leaking details to attackers).
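
Put together, the verification path looks roughly like this. It's a sketch, not our exact code: canonicalSerialize, nonceCache, and timingSafeEqualHex are the illustrative helpers sketched elsewhere in this article, and the header plumbing is simplified.

```typescript
// Sketch of the verification side: freshness, then nonce, then signature.
const FRESHNESS_WINDOW_SECONDS = 300; // 5 minutes

async function verifyRequest(
  appKey: string,
  method: string,
  path: string,
  body: unknown,
  sig: { signature: string; timestamp: string; nonce: string },
): Promise<boolean> {
  // Check 1: timestamp freshness.
  const now = Math.floor(Date.now() / 1000);
  if (Math.abs(now - Number(sig.timestamp)) > FRESHNESS_WINDOW_SECONDS) return false;

  // Check 2: nonce uniqueness within the window.
  if (!nonceCache.addIfUnseen(sig.nonce, now)) return false;

  // Check 3: recompute the signature over the reconstructed payload.
  const payload = [
    method.toUpperCase(),
    path,
    body === undefined ? "" : canonicalSerialize(body),
    sig.timestamp,
    sig.nonce,
  ].join("\n");
  const expected = await hmacHex(appKey, payload);

  // Constant-time comparison; the caller only ever sees a bare 401 on failure.
  return timingSafeEqualHex(expected, sig.signature);
}
```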


VII – The Canonical Payload Problem

This one almost broke us.

HMAC signing requires that both sides compute the exact same signing payload. If there's even a single byte difference — an extra space, a different key order in JSON, a trailing newline — the signatures won't match.

JSON serialization is not deterministic by default.

The object { "b": 2, "a": 1 } and the object { "a": 1, "b": 2 } are semantically identical in JavaScript. But their serialized strings are different. If the sender serializes one way and the receiver another, verification fails.

We solve this with a canonical serialization function that:

  • Sorts object keys alphabetically at every nesting level
  • Removes undefined values
  • Uses no whitespace (no pretty-printing)
  • Handles special types consistently (dates become ISO strings, buffers become base64)

This function is shared between the platform and the Worker runtime. It's one of the very few pieces of code that exist in both codebases, because it absolutely must be identical.
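
Here's a minimal sketch of such a shared canonicalizer. It covers the rules listed above; a production version would also need to handle binary data and guard against cycles.

```typescript
// Canonical JSON serialization: sorted keys, no whitespace, no undefined,
// Dates as ISO strings. Must be byte-for-byte identical on both sides.
function canonicalSerialize(value: unknown): string {
  if (value === null || typeof value === "number" ||
      typeof value === "boolean" || typeof value === "string") {
    return JSON.stringify(value);
  }
  if (value instanceof Date) return JSON.stringify(value.toISOString());
  if (Array.isArray(value)) {
    // undefined inside arrays serializes as null, matching JSON.stringify.
    return "[" + value.map((v) => (v === undefined ? "null" : canonicalSerialize(v))).join(",") + "]";
  }
  if (typeof value === "object") {
    const entries = Object.entries(value as Record<string, unknown>)
      .filter(([, v]) => v !== undefined)      // drop undefined values
      .sort(([a], [b]) => (a < b ? -1 : 1))    // sort keys at every nesting level
      .map(([k, v]) => JSON.stringify(k) + ":" + canonicalSerialize(v));
    return "{" + entries.join(",") + "}";
  }
  throw new TypeError(`Cannot canonicalize a ${typeof value}`);
}
```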

Testing this was critical. We have a dedicated test suite that feeds the canonical serializer every edge case we could think of — nested objects, arrays with mixed types, empty objects, null values, Unicode strings, numbers with floating-point precision issues — and verifies that the output is byte-for-byte identical across both environments.

If the canonical serializer is wrong, the entire signing protocol breaks. It's the most boring and most important piece of the system.


VIII – App-to-Platform Calls

The most common inter-app communication pattern is an app calling the platform API.

A vibe app needs to check a user's credit balance, or trigger a hold on credits, or fetch configuration data. These calls go from the Cloudflare Worker sandbox to the Elysia.js server on the Hetzner VPS.

The flow:

The Worker constructs the request. Before sending, it signs the request using its derived key (which it received from the platform at deployment time, encrypted and stored in the Worker's environment variables).

The platform receives the request. It extracts the app slug from the subdomain (using the same subdomain-as-principal resolution described in a previous article). It derives the expected signing key from the master secret and the slug. It verifies the signature.

If verification passes, the platform knows three things with certainty:

  1. The request came from the app it claims to be (authenticity)
  2. The request body hasn't been modified in transit (integrity)
  3. The request is fresh, not a replay (freshness)

The platform then processes the request with the app's permissions scope. The app can only access its own data, its own credits, its own configuration.


IX – Platform-to-App Calls

The reverse direction — platform calling an app — is less obvious but equally important.

When does the platform call an app? Webhooks. The platform notifies apps about events: a payment was confirmed, a user's credits were topped up, an admin action affected the app's configuration.

Without signing, webhook delivery is a gaping security hole.

Anyone who discovers the webhook URL can send fake events to the app. "Hey, a payment of $10,000 just came through. Please deliver the goods." If the app doesn't verify the source, it acts on fake data.

The platform signs outbound webhooks using the same HMAC protocol. The app verifies the signature using its copy of the signing key. If verification fails, the webhook is ignored.
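
Inside the Worker, the webhook handler is a sketch like this; verifyRequest is the illustrative helper from the verification section, and APP_SIGNING_KEY is a hypothetical name for the key binding in the Worker's environment.

```typescript
// Sketch: app-side webhook handler in a Cloudflare Worker (module syntax).
export default {
  async fetch(req: Request, env: { APP_SIGNING_KEY: string }): Promise<Response> {
    const url = new URL(req.url);
    const event = await req.json();

    const ok = await verifyRequest(env.APP_SIGNING_KEY, req.method, url.pathname, event, {
      signature: req.headers.get("x-sig") ?? "",
      timestamp: req.headers.get("x-sig-ts") ?? "",
      nonce: req.headers.get("x-sig-nonce") ?? "",
    });
    if (!ok) return new Response("unauthorized", { status: 401 }); // ignore fake events

    // ...act on the verified event (payment confirmed, credits topped up, ...)...
    return new Response("ok");
  },
};
```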

This is the same pattern that Stripe, GitHub, and Twilio use for webhook verification. We just extended it to cover all inter-app communication, not just webhooks.


X – Vibe-to-Vibe Calls: The Tricky Part

Now for the genuinely hard problem.

App A wants to call App B. This is a three-party situation: App A, App B, and the platform. App A has a signing key derived from its slug. App B has a different signing key derived from its slug. They don't share a key.

Direct HMAC signing between apps is impossible without shared keys. And we can't give App A access to App B's key — that would break isolation.

So how do we make vibe-to-vibe calls work?

The answer: the platform acts as a signing intermediary.

App A doesn't call App B directly. App A sends a request to the platform, signed with App A's key, saying "please forward this to App B." The platform verifies App A's signature, checks that App A has permission to call App B (this is configurable per-app), and then re-signs the request with App B's key before forwarding it.

App B receives the request, verifies the platform's signature using its own key, and processes it. From App B's perspective, the request came from the platform — because it did. The platform vouched for it.

This is conceptually similar to how OAuth works. The platform is the authorization server. App A authenticates with the platform. The platform issues a verified request to App B. App B trusts the platform.

The trade-off is latency. Every vibe-to-vibe call has an extra hop through the platform. But the security benefit is worth it — no app ever needs to trust another app directly. They only trust the platform.
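
To make the hop concrete, here's a sketch of the forwarding step, assuming App A's own signature has already been verified on the way in. canForward stands in for the per-app permission check and is hypothetical, as are the other helpers, which come from the earlier sketches.

```typescript
// Hypothetical per-app permission check (e.g. backed by app configuration).
declare function canForward(fromSlug: string, toSlug: string): Promise<boolean>;

// Sketch: the platform as signing intermediary for a vibe-to-vibe call.
async function forwardVibeToVibe(
  fromSlug: string,
  toSlug: string,
  path: string,
  body: unknown,
  masterSecret: string,
): Promise<Response> {
  // Is App A allowed to call App B at all?
  if (!(await canForward(fromSlug, toSlug))) {
    return new Response("unauthorized", { status: 401 });
  }

  // Re-sign with App B's derived key. App B only ever trusts the platform.
  const targetKey = await deriveAppKey(masterSecret, toSlug);
  const headers = await signRequest(targetKey, "POST", path, body);

  // Forward to App B's subdomain with the platform's signature attached.
  return fetch(`https://${toSlug}.vibe.oakoliver.com${path}`, {
    method: "POST",
    headers: { "content-type": "application/json", ...headers },
    body: canonicalSerialize(body),
  });
}
```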


Want to build secure inter-app communication?

The Vibe platform at vibe.oakoliver.com handles thousands of HMAC-signed requests daily across its micro-app ecosystem. If you're designing a multi-tenant platform and wrestling with inter-service authentication, I'd love to talk through the patterns.

Book a mentoring session at mentoring.oakoliver.com, or explore more engineering deep-dives at oakoliver.com.


XI – Error Handling: Silent Failures, Loud Logs

When HMAC verification fails, the response to the caller is intentionally uninformative. A 401 status with a generic "unauthorized" message. Nothing more.

We never tell the caller why verification failed.

Was the signature wrong? Was the timestamp expired? Was the nonce reused? We don't say. Because each piece of information helps an attacker refine their approach.

If we returned "timestamp expired," an attacker knows their replay timing is off and adjusts. If we returned "signature mismatch," an attacker knows they're close and keeps trying. Silence gives them nothing to work with.

But internally, we log everything.

Every failed verification produces a structured log entry with the failure reason, the claimed app slug, the timestamp drift, whether the nonce was a duplicate, and the request path. These logs feed into our monitoring system, and we alert on anomalies — like a sudden spike in verification failures from a single app, which might indicate a compromised key.

The security principle: be opaque to attackers, transparent to operators.


XII – Key Rotation Without Downtime

Keys get compromised. It happens. When it does, you need to rotate them without taking the platform offline.

Our rotation protocol works in three phases.

Phase 1: Dual acceptance. The platform begins accepting signatures from both the old derived key and the new derived key. This is implemented by attempting verification with the new key first, then falling back to the old key. During this phase, apps using the old key continue working.
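
In code, dual acceptance is a small change at the signature check. A sketch, using the hmacHex and constant-time comparison helpers sketched elsewhere in this article; the freshness and nonce checks run once, unchanged.

```typescript
// Phase 1 (dual acceptance): try the new key first, fall back to the old one.
async function signatureMatchesEitherKey(
  payload: string,
  signature: string,
  newKey: string,
  oldKey?: string,   // present only while the rotation window is open
): Promise<boolean> {
  if (timingSafeEqualHex(await hmacHex(newKey, payload), signature)) return true;
  return oldKey !== undefined &&
    timingSafeEqualHex(await hmacHex(oldKey, payload), signature);
}
```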

Phase 2: Key distribution. Each app's Worker is redeployed with the new derived key in its environment variables. This happens gradually — not all at once. Workers pick up the new key on their next cold start or redeployment.

Phase 3: Old key revocation. Once all apps have been updated (confirmed by monitoring logs — no more verifications succeeding on the old key), the platform stops accepting the old key. The rotation is complete.

The entire process takes about an hour. No downtime. No failed requests (as long as the dual acceptance window is longer than the redeployment window).

The master secret rotation triggers this process for all apps simultaneously. Individual app key rotations — rare, but necessary if a single app's key is compromised — only affect that one app.


XIII – Performance: What Signing Costs You

Cryptographic operations aren't free. Every HMAC computation burns CPU cycles. On a platform processing thousands of requests per minute, this matters.

Here's what we measured.

Signing a request (sender side): The canonical serialization takes about 0.1ms for a typical payload. The HMAC-SHA256 computation takes about 0.05ms. Header construction is negligible. Total: roughly 0.15ms per request.

Verifying a request (receiver side): Timestamp check is negligible. Nonce lookup in the in-memory cache is about 0.02ms. Canonical serialization of the received payload takes 0.1ms. HMAC computation takes 0.05ms. Constant-time comparison takes negligible time. Total: roughly 0.2ms per request.

Combined round-trip overhead: about 0.35ms.

For context, the network latency between a Cloudflare Worker and our Hetzner VPS in Finland is typically 5-15ms. The HMAC overhead is less than 3% of the network round trip.

It's free, effectively. The security you get for 0.35ms of CPU time is extraordinary.

We also benchmarked against alternatives. JWT verification with RS256 (RSA signatures over SHA-256) takes about 0.8ms — more than twice as slow. And JWTs are larger, adding bytes to every request header.

HMAC is fast because it's symmetric. No public/private key pairs, no certificate chains, no ASN.1 parsing. Just a hash function and a key.


XIV – Why Not Just Use JWTs?

This comes up every time I discuss this system. "Why not issue JWTs to each app and verify them on the receiving end?"

JWTs solve authentication. They prove who sent the request. But they don't solve integrity for the request body.

A JWT in a header proves that the sender is who they claim to be. But it says nothing about whether the request body was modified after the JWT was issued. An attacker who intercepts a request can swap the body while keeping the valid JWT header.

HMAC signs the entire request — method, path, body, timestamp, nonce. The body is part of the signed payload. Tamper with any piece, and verification fails.

You could include a hash of the body in the JWT claims. But now you're computing a hash for the body AND a signature for the JWT. You've done two cryptographic operations where HMAC does one. And you've added complexity for no security benefit.

JWTs are the right tool for bearer authentication. HMAC is the right tool for request signing. They solve different problems. Using JWTs for request signing is a category error.


XV – The Constant-Time Comparison Detail

This is a subtle but critical implementation detail that most tutorials skip.

When comparing the computed HMAC with the one from the request header, you must use constant-time comparison. Not regular string equality.

Why? Regular string comparison short-circuits. It returns false as soon as it finds the first mismatched character. This means comparing "abcdef" to "abcxyz" takes longer than comparing "abcdef" to "xyzdef" — because the first comparison matches three characters before failing, while the second fails immediately.

An attacker can exploit this timing difference to discover the correct signature one character at a time. Send a request, measure the response time, and gradually guess the signature byte by byte.

Constant-time comparison always takes the same amount of time regardless of where the mismatch occurs. It eliminates timing-based information leakage entirely.
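
Here's what a minimal constant-time comparison looks like for two hex-encoded MACs. In Node you can reach for crypto.timingSafeEqual instead, and crypto.subtle.verify compares in constant time internally.

```typescript
// Constant-time comparison of two equal-length hex strings.
function timingSafeEqualHex(a: string, b: string): boolean {
  // Rejecting on length is safe here: HMAC-SHA256 hex is always 64 characters.
  if (a.length !== b.length) return false;
  let diff = 0;
  for (let i = 0; i < a.length; i++) {
    // Accumulate differences instead of returning on the first mismatch.
    diff |= a.charCodeAt(i) ^ b.charCodeAt(i);
  }
  return diff === 0;
}
```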

This is not a theoretical attack. Timing attacks against HMAC verification have been demonstrated in peer-reviewed security research. Every serious cryptographic library includes a constant-time comparison function for exactly this reason.

If you're implementing HMAC verification and using regular string equality for comparison, you have a vulnerability. Full stop.


XVI – The Nonce Cache: Memory-Bounded Replay Protection

The nonce cache deserves its own discussion because getting it wrong creates either a security hole (too loose) or a memory leak (too tight).

The requirements:

  • Store every nonce seen within the timestamp freshness window (5 minutes)
  • Reject any request with a previously seen nonce
  • Automatically evict nonces older than the freshness window
  • Bounded memory usage — can't grow forever

The solution: A combination of a set for O(1) lookups and a queue for time-ordered eviction.

When a request arrives, the nonce is checked against the set. If present, the request is rejected. If not, the nonce is added to both the set and the queue (with its timestamp).

A periodic cleanup sweeps the queue, removing entries older than the freshness window and deleting them from the set.

Why not just use a Map with TTL? Because most TTL-based caches evict lazily — they only check expiration when you access the key. For a nonce cache, you need proactive eviction to keep memory bounded. If you're processing 1,000 requests per minute, that's 5,000 nonces in the 5-minute window. Without proactive eviction, the cache grows indefinitely.

Our cleanup runs every 60 seconds. It processes the queue front-to-back (oldest first) and stops as soon as it hits a non-expired entry. In steady state, this takes less than 1ms.
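
Here's a sketch of that structure as it might run on the platform server; the window and sweep interval match the numbers above.

```typescript
// Nonce cache sketch: a Set for O(1) lookups plus a FIFO queue for
// proactive, time-ordered eviction. Memory stays bounded by the window.
class NonceCache {
  private seen = new Set<string>();
  private queue: Array<{ nonce: string; ts: number }> = [];

  constructor(private windowSeconds = 300) {
    setInterval(() => this.evict(), 60_000); // periodic cleanup, every 60 seconds
  }

  /** Returns true if the nonce is new (and records it); false if it's a replay. */
  addIfUnseen(nonce: string, nowSeconds: number): boolean {
    if (this.seen.has(nonce)) return false;
    this.seen.add(nonce);
    this.queue.push({ nonce, ts: nowSeconds });
    return true;
  }

  private evict(): void {
    const cutoff = Math.floor(Date.now() / 1000) - this.windowSeconds;
    // Oldest entries sit at the front; stop at the first non-expired one.
    while (this.queue.length > 0 && this.queue[0].ts < cutoff) {
      this.seen.delete(this.queue.shift()!.nonce);
    }
  }
}

const nonceCache = new NonceCache();
```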


XVII – Debugging Signed Requests

HMAC signing makes debugging harder. You can't just curl an endpoint anymore — you need to sign the request first.

We solved this with three tools:

A signing CLI tool. A command-line utility that takes a method, URL, body, and app slug, and outputs the signed request with all headers. Developers use this for manual testing.

A verification debug mode. In development only, failed verifications return detailed error information — which step failed, what the expected and actual values were, and what the reconstructed signing payload looked like. This is disabled in production (for the reasons discussed in the error handling section).

Request logging with signature metadata. Every signed request (in development) is logged with the full signing payload, the computed signature, and the result. This makes it trivial to spot canonicalization mismatches.

The verification debug mode has saved us dozens of hours. The most common bug during development is a canonicalization mismatch — the sender and receiver serialize the payload differently. The debug output shows exactly where the divergence is.


XVIII – Lessons from Building This System

After running HMAC-signed inter-app communication in production for months, here are the lessons that weren't obvious at the start.

Canonical serialization is harder than signing. The HMAC part is straightforward — every language has a crypto library. The canonical serialization is where bugs hide. Different JSON serializers handle edge cases differently. Test this obsessively.

Clock synchronization matters more than you think. A 5-minute freshness window seems generous. But if one service's clock drifts 3 minutes, you only have a 2-minute effective window. Monitor clock drift. Use NTP. Alert on drift.

Nonce generation must be cryptographically random. Using sequential IDs or timestamps as nonces defeats the purpose. An attacker who can predict the nonce can precompute signatures. Use your runtime's cryptographic random generator.

Key derivation is a force multiplier. The decision to derive keys from a master secret instead of storing individual keys eliminated an entire category of operational problems — key storage, key distribution, key synchronization. The simplicity is worth the single-point-of-failure trade-off.

Signing should be invisible to app developers. The signing and verification logic is in the platform's HTTP client library. App developers make a normal API call, and the library handles signing transparently. If developers had to manually sign requests, they'd get it wrong.
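
As a sketch, that wrapper can be as thin as this, built on the illustrative signRequest and canonicalSerialize helpers from earlier; app code reads like a normal fetch call.

```typescript
// Sketch: a fetch wrapper that signs transparently, so app code never
// touches keys, nonces, or timestamps directly.
async function signedFetch(
  appKey: string,
  url: string,
  init: { method?: string; body?: unknown } = {},
): Promise<Response> {
  const method = init.method ?? "GET";
  const headers = await signRequest(appKey, method, new URL(url).pathname, init.body);
  return fetch(url, {
    method,
    headers: { "content-type": "application/json", ...headers },
    body: init.body === undefined ? undefined : canonicalSerialize(init.body),
  });
}

// Usage, from an app's point of view:
// await signedFetch(APP_KEY, "https://vibe.oakoliver.com/api/credits/balance");
```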


XIX – The Trust Hierarchy

Zooming out, the HMAC signing system creates an explicit trust hierarchy for the Vibe platform.

Level 1: The master secret. This is the root of trust. Everything derives from it. It lives in an environment variable on the platform server. It's never transmitted, never logged, never exposed in any API.

Level 2: Derived app keys. Each app's signing key is derived from the master secret. These keys are distributed to Worker sandboxes via encrypted environment variables. They can sign requests to the platform and verify requests from the platform.

Level 3: Signed requests. Individual requests are authenticated by their HMAC signatures. They're valid for 5 minutes. They're unique (via nonces). They prove both identity and integrity.

This hierarchy means compromise at any level has a bounded blast radius. A compromised request signature affects one request. A compromised app key affects one app. Only a compromised master secret affects the entire platform.

That's the goal of any security architecture: bounded blast radius. Not preventing compromise entirely (impossible), but ensuring that when compromise happens, the damage is contained.


XX – Beyond HMAC: When You Need More

HMAC signing solves our problem perfectly because our threat model is well-defined: app impersonation, request tampering, replay attacks. All within a controlled ecosystem where the platform is the root of trust.

But HMAC has limitations.

No non-repudiation. Because HMAC is symmetric (both sides have the same key), a receiver can forge a signature that looks like it came from the sender. In our system this doesn't matter — the platform is the trust root, and we don't need to prove to a third party who sent what. But in systems where non-repudiation matters (financial transactions with external auditors), you'd need asymmetric signatures.

No forward secrecy. If a key is compromised, every past message signed with that key can no longer be trusted as authentic, and an attacker holding the key can forge fresh signatures that verifiers will accept (the freshness window limits replay, not forgery). For our use case this is acceptable — we detect key compromise through monitoring and rotate quickly.

No payload encryption. HMAC proves authenticity and integrity but doesn't encrypt the payload. Our inter-app requests travel over HTTPS, so encryption is handled by TLS. But in environments without TLS (internal networks, for instance), you'd need to add encryption separately.

If your threat model requires any of these properties, HMAC alone isn't enough. But for the vast majority of inter-service authentication scenarios — especially within a single-operator platform like Vibe — HMAC is the right level of security at the right level of complexity.


XXI – The Question That Started This

I built this system because of a question that kept me up at night:

If a user running code on my platform decided to attack another user on my platform, how would I even know?

The answer before HMAC signing was: I wouldn't. One Worker could forge a request to look like another Worker. The platform would process it. No logs, no alerts, no evidence.

Now the answer is: I'd know immediately. The forged signature would fail verification. The request would be rejected. The failure would be logged. The alert would fire.

Security isn't about eliminating all attacks. It's about making attacks visible and contained.

So here's my question for you:

If two services in your system talk to each other right now, what actually proves that a message came from the service it claims to come from?

If the answer is "the network" or "the firewall" or "we trust internal traffic" — you might want to rethink that.

– Antonio

"Simplicity is the ultimate sophistication."