


WebSocket broadcast: hand-rolled hub vs broadcast::Sender

A single ticker per backend mutates an owned price cache and fans out updates to every connected WebSocket subscriber. tokio::sync::broadcast::Sender does this in 25 lines of hub code; the equivalent Go hub is ~80.

Go (chi · sqlc · pgx)
Hand-rolled hub: RWMutex + map + per-client chan
go/internal/ws/hub.go
// shop-two-backends not found at build time
WebSocket handler — coder/websocket
go/internal/httpserver/prices_ws.go
// shop-two-backends not found at build time
Rust (axum · sqlx · tokio)
broadcast::Sender — runtime fan-out
rust/src/ws/hub.rs
// shop-two-backends not found at build time
WebSocket handler — axum built-in
rust/src/ws/prices.rs
// shop-two-backends not found at build time

What to take away

Every Go WebSocket tutorial reproduces the same pattern as the hub on the left, because there's nothing in the standard library that does it for you. Slow-consumer policy, lock granularity, channel sizing, and lifecycle are all your problem.

broadcast::Sender is part of tokio. Cloning a Sender creates another producer; calling .subscribe() creates a Receiver that sees every message sent after that point. A slow consumer gets RecvError::Lagged(n), telling it exactly how many messages it missed, rather than losing data silently — a policy the Go hub has to implement (or skip) explicitly.

On the Rust side, the application code has fewer correctness levers to get wrong, and the comparison test (open two clients, assert both receive a shared slug) confirms fan-out works on both backends.