The simplest version
Server-Sent Events (SSE) is a way for a server to stream a sequence of messages to a connected browser over a single long-lived HTTP response. The browser opens `new EventSource('/stream')`; the server keeps the connection open and writes `data: ...\n\n` per message; the client receives them as `onmessage` events.
That's the whole API.
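In code, the client side really is this small (the `/stream` path is just the endpoint from the example above):

```js
// Open the stream and handle messages; the browser does the rest.
const es = new EventSource('/stream')
es.onmessage = (e) => console.log(e.data)     // one call per "data: ...\n\n" block
es.onerror = () => { /* the browser reconnects on its own */ }
```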
SSE vs WebSockets
| | SSE | WebSockets |
|---|---|---|
| Direction | Server → client only | Bidirectional |
| Protocol | Plain HTTP | Upgrade to `ws://` |
| Reconnect | Built-in (browser) | Manual |
| Browser API | `EventSource` (4 lines) | `WebSocket` (more) |
| Through proxies | Works through any HTTP-aware proxy | Requires a proxy that handles the HTTP/1.1 upgrade |
| Server complexity | Just keep writing | Frame protocol |
The right way to think about it: WebSockets is a duplex socket pretending to be HTTP. SSE is HTTP with a long response body and a parsing convention. If you don't need the duplex direction, SSE is simpler at every layer.
When SSE is the right pick
- Live dashboards — server pushes counters / events; client doesn't talk back. Stripe Dashboard, Vercel deployments, GitHub Actions log tail are all SSE-shaped.
- AI streaming responses — GPT / Claude / Gemini SDKs all stream tokens via SSE. The Anthropic / OpenAI APIs are SSE under the hood.
- Notifications — server tells the client about a new email / message / alert. Client just needs to read.
- Long-running job progress — emit a `progress: 45%` event every couple of seconds.
When you need the client to send messages too (chat input, collaborative cursors, game state), use WebSockets. SSE plus a separate POST endpoint for the client-to-server direction can also work, but it tends to feel awkward.
The SSE wire format
Trivial:
HTTP/1.1 200 OK
Content-Type: text/event-stream
Cache-Control: no-cache

event: progress
data: {"step": 1, "total": 5}

event: progress
data: {"step": 2, "total": 5}

data: this is a default-event message

event: done
data: {"ok": true}
Each message is a few lines + a blank line. The browser's EventSource parses it.
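One wrinkle worth knowing: messages with an `event:` field don't fire `onmessage`; you listen for them by name. A minimal client sketch for the stream above (same `/stream` path, listener names matching the `event:` fields):

```js
const es = new EventSource('/stream')

// Named events ("event: progress", "event: done") need addEventListener.
es.addEventListener('progress', (e) => {
  const { step, total } = JSON.parse(e.data)
  console.log(`step ${step} of ${total}`)
})

es.addEventListener('done', (e) => {
  console.log('finished:', e.data)
  es.close() // otherwise the browser reconnects once the server closes the stream
})

// Messages with no "event:" field arrive here.
es.onmessage = (e) => console.log(e.data)
```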
The stack-specific gotchas
- Buffering — your reverse proxy might buffer the response, batching events into multi-second chunks. nginx requires `proxy_buffering off;` for the SSE route (see the sketch after this list). Cloudflare's free plan buffers; their Pro+ plans don't.
- Reconnect storms — `EventSource` reconnects automatically on disconnect. If your server crashes with 10,000 clients connected, they all reconnect within seconds. Implement exponential backoff or rate-limiting at the edge.
- HTTP/2 multiplexing — SSE under HTTP/2 doesn't suffer the 6-connections-per-origin limit of HTTP/1.1. If you have many open SSE streams on one origin, HTTP/2 helps.
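For the buffering point, this is roughly what the nginx side can look like. A sketch only, assuming your app listens on localhost:3000 and the stream lives under /stream; adjust to your setup:

```nginx
# Hypothetical location block for an SSE route; the key line is proxy_buffering off.
location /stream {
    proxy_pass http://localhost:3000;
    proxy_http_version 1.1;          # keep the upstream connection open
    proxy_set_header Connection "";  # don't ask the upstream to close after one response
    proxy_buffering off;             # flush each event to the client immediately
    proxy_cache off;
    proxy_read_timeout 1h;           # don't kill idle long-lived streams
}
```

nginx also honors an `X-Accel-Buffering: no` response header from the app, which gets you the same effect per-response without touching the config.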
Through a tunnel
SSE is plain HTTP — every reverse-tunnel service handles it. lrok specifically streams response bodies without buffering, which matters because the typical SSE event is a few hundred bytes and a buffering proxy delays them by up to a second.
$ lrok http 3000
If your dev server emits SSE events, they arrive in the browser instantly through the lrok tunnel. We use the same code path for the dashboard's request inspector, which itself is SSE.
Try it locally
// server.js (Node; ES modules, so save as server.mjs or add "type": "module" to package.json)
import { createServer } from 'http'

createServer((req, res) => {
  if (req.url === '/stream') {
    // SSE response: one long-lived body, one "data: ...\n\n" frame per tick
    res.writeHead(200, {
      'Content-Type': 'text/event-stream',
      'Cache-Control': 'no-cache',
    })
    let i = 0
    const id = setInterval(() => {
      res.write(`data: tick ${++i}\n\n`)
      if (i >= 10) { clearInterval(id); res.end() }
    }, 1000)
    // Stop writing if the browser goes away before the ten ticks are done
    req.on('close', () => clearInterval(id))
  } else {
    // Tiny test page: open the stream and append each tick to the body
    res.writeHead(200, { 'Content-Type': 'text/html' })
    res.end('<script>new EventSource("/stream").onmessage = e => document.body.append(e.data + " ")</script>')
  }
}).listen(3000)
$ node server.js &
$ lrok http 3000
Open the lrok URL in a browser; you'll see ticks land one per second.
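To watch the raw frames instead of the parsed events, you can also hit the stream directly with curl; the -N flag turns off curl's own output buffering:
$ curl -N http://localhost:3000/stream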