// blog


Building an ngrok alternative in 800 lines of Go

10 min read · architecture · go · networking

ngrok is wonderful. It's also expensive once you grow past the free plan, and the URL rotation on the free tier is a daily annoyance for anyone integrating Stripe or GitHub webhooks during development. So I built lrok.

This post is the architectural tour: what's in the Go binary, how the bytes flow, and the choices that kept the whole edge under 1000 lines of meaningful code.

The core idea

A reverse tunnel is a process on your laptop holding a long-lived TCP connection to a server with a public IP. When the public side receives an HTTP request, it ships the bytes back over that connection. The agent on your laptop forwards them to localhost, gets a response, and sends it back the other way.

The server-side complexity is in two places:

  1. Multiplexing many concurrent requests over the single agent connection. Without this, you'd open one TCP connection per request, which is slow and ugly behind NAT.
  2. Routing the public request to the right agent. Many users, many subdomains.

For (1) we use yamux: one yamux session per agent, one yamux stream per HTTP request. On the agent, each incoming stream gets a handler that dials localhost:3000 and pipes bytes in both directions until either side closes.

For (2) we keep an in-memory Registry keyed by subdomain, populated when the agent registers and cleaned up when the session closes. The reverse proxy reads r.Host, looks up the entry, and sends the request down the matching tunnel.

type Registry struct {
    mu      sync.RWMutex
    entries map[string]*Entry // key: subdomain
}

func (r *Registry) Get(sub string) (*Entry, bool) {
    r.mu.RLock()
    defer r.mu.RUnlock()
    e, ok := r.entries[sub]
    return e, ok
}
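
The lookup key has to be derived from r.Host first. A tiny helper, assuming single-level subdomains under one base domain (the helper name and shape are mine, not lrok's actual code):

```go
package main

import "strings"

// subdomainFromHost extracts "abc" from "abc.lrok.io".
// Assumes the port has already been stripped and that only
// single-level subdomains are valid tunnel names.
func subdomainFromHost(host, baseDomain string) (string, bool) {
	if !strings.HasSuffix(host, "."+baseDomain) {
		return "", false
	}
	sub := strings.TrimSuffix(host, "."+baseDomain)
	if sub == "" || strings.Contains(sub, ".") {
		return "", false // bare domain or nested subdomain: no tunnel
	}
	return sub, true
}
```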

That's the whole thing in two paragraphs. The rest is finishing.

TLS, but reuse Traefik's cert

The agent transmits its API key in the first frame, so the agent ↔ edge connection has to be TLS-encrypted. But running a second ACME client for yet another cert on the same box where Traefik already manages *.lrok.io would mean two clients fighting over the same rate limits.

Solution: read Traefik's acme.json directly. Pick the entry whose domain is tunnel.lrok.io, hand it to a tls.Config{GetCertificate: ...}, watch the file with fsnotify, reload on change. ~120 lines, no new ACME logic.

The edge process never modifies acme.json. Traefik owns renewal; we just consume.

Why one binary

Most "tunnel" stacks I'd seen split this into three services: an L4 proxy, an L7 proxy, and a control plane. That's reasonable at scale; it's overkill for a single edge in Helsinki.

So lrok-edge is one Go binary with:

  • An HTTP forwarder serving *.lrok.io (Traefik-terminated).
  • A yamux control plane on :7000 (TLS, agent connections).
  • A TCP forwarder for tcp.lrok.io:30000-30099 (raw TCP tunnels).
  • A control-plane API mounted on the same :8080 HTTP listener under api.lrok.io and /__lrok/....
  • An admin surface (/api/v1/admin/*) gated by LROK_ADMIN_USER_IDS.

Routing between "tunnel forwarder" and "control plane API" is a Host check. r.Host == "api.lrok.io" → API handler; otherwise → reverse proxy. ~10 lines in main. No service mesh, no per-host pod, no internal DNS.

The request inspector

When you tunnel HTTP, lrok captures up to 100 recent requests per tunnel and serves them at /dashboard/tunnels/<sub>. Method, path, status, latency, full body.

This lives entirely in the forwarder process — a per-tunnel ring buffer. Captures evict on tunnel close. Never persisted to disk. The dashboard subscribes via Server-Sent Events on /api/v1/me/tunnels/<sub>/requests/stream.
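
A sketch of the per-tunnel ring. The Capture fields are trimmed down for illustration — the real inspector also stores headers, body, and latency:

```go
package main

import "sync"

// Capture is a minimal stand-in for one recorded request.
type Capture struct {
	Method string
	Path   string
	Status int
}

// Ring keeps the last N captures, overwriting the oldest.
// Everything stays in memory; nothing touches disk.
type Ring struct {
	mu   sync.Mutex
	buf  []Capture
	next int // index of the slot to overwrite
	full bool
}

func NewRing(n int) *Ring { return &Ring{buf: make([]Capture, n)} }

func (r *Ring) Add(c Capture) {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.buf[r.next] = c
	r.next = (r.next + 1) % len(r.buf)
	if r.next == 0 {
		r.full = true // we have wrapped at least once
	}
}

// Snapshot returns the captures oldest-first.
func (r *Ring) Snapshot() []Capture {
	r.mu.Lock()
	defer r.mu.Unlock()
	if !r.full {
		out := make([]Capture, r.next)
		copy(out, r.buf[:r.next])
		return out
	}
	out := make([]Capture, 0, len(r.buf))
	out = append(out, r.buf[r.next:]...) // oldest entries
	out = append(out, r.buf[:r.next]...) // newest entries
	return out
}
```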

The privacy implication mattered for the design: customers' webhook bodies often contain PII. Storing them, even briefly on disk, would force a much bigger compliance footprint. The in-memory ring keeps the threat model simple — only the tunnel's owner can read their own captures, and they vanish when the tunnel closes.

What's not in here

  • A relay protocol of our own. yamux already exists, is well-tested, and handles flow control. Inventing one would have been a year-long distraction.
  • An ACME client. Traefik does that.
  • A queue or message bus for analytics. We use PostHog (browser + server) and self-host the operator dashboard counters in a 200-line tracker package.
  • Custom DNS. Namecheap fronts lrok.io and the user's CNAME points at us; verification is a TXT record.

The headline win of this whole design is that I can deploy it with go build && scp. Single binary, single config, one process to restart if it crashes. The ops surface is a fraction of what a microservice mesh would demand.

What's next

The next milestones are visible in the changelog: TLS pass-through (raw TLS tunnels for clients that handle their own certs), regional edges, and an OSS agent (the binary you run on your laptop) so people can audit the wire protocol.

If you want to try it: curl -fsSL https://lrok.io/install.sh | sh. Free plan keeps a reserved subdomain forever.

// try it yourself

lrok is a hosted reverse-tunnel: one command, public HTTPS URL, reserved subdomain on the free plan. $9/mo flat for unlimited.