WebSocket broadcast: hand-rolled hub vs broadcast::Sender
A single ticker per backend mutates an owned price cache and fans out updates to every connected WebSocket subscriber. tokio::sync::broadcast::Sender does this in 25 lines of hub code; the equivalent Go hub is ~80.
What to take away
Every Go WebSocket tutorial reproduces the same hand-rolled hub pattern, because nothing in the standard library does it for you. Slow-consumer policy, lock granularity, channel sizing, and connection lifecycle are all your problem.
broadcast::Sender is part of tokio. Cloning a Sender creates a new producer; calling .subscribe() creates a new Receiver that gets every future message. Slow consumers receive RecvError::Lagged instead of silently missing data, a property the Go hub has to implement (or skip) explicitly.
The application code has fewer correctness levers to get wrong, and the comparison test (open 2 clients, assert both see a shared slug) confirms fan-out works on both sides.