In-process broadcast with io.to(channel).emit() only reaches clients connected to the same server instance. The moment you deploy two instances — behind a load balancer or via auto-scaling — users on different instances stop receiving each other's messages. This is a latent failure: the app works perfectly in development and single-instance staging, then silently breaks in production the first time the process count exceeds one. ISO 25010 reliability requires the system to function correctly under expected deployment configurations, including horizontal scaling.
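The failure mode is easy to demonstrate without real sockets. The toy model below (all names illustrative) gives each instance its own in-memory room map, as Socket.IO's default adapter does, and shows a broadcast on one instance never reaching a client connected to the other:

```javascript
// Toy model: each server instance keeps its own map of room -> clients,
// just like Socket.IO's default in-memory adapter.
class Instance {
  constructor() { this.rooms = new Map(); }
  join(room, client) {
    if (!this.rooms.has(room)) this.rooms.set(room, new Set());
    this.rooms.get(room).add(client);
  }
  broadcast(room, message) {
    for (const client of this.rooms.get(room) ?? []) client.inbox.push(message);
  }
}

const a = new Instance(); // first server process
const b = new Instance(); // second server process
const alice = { inbox: [] }; // the load balancer routed Alice to instance a
const bob = { inbox: [] };   // and Bob to instance b
a.join('general', alice);
b.join('general', bob);

a.broadcast('general', 'hello');
// alice.inbox is ['hello']; bob.inbox stays empty, and nothing errors
```

A cross-instance adapter fixes this by turning `broadcast` into a publish on a shared channel that every instance subscribes to, so membership on a single process no longer determines delivery.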
Severity is high because in-process-only broadcast silently partitions users across server instances, causing message loss that is invisible in single-instance testing.
Route all broadcasts through a shared pub/sub layer — Redis is the standard choice for Socket.IO via the @socket.io/redis-adapter package.
import { createAdapter } from '@socket.io/redis-adapter';
import { createClient } from 'redis';
const pub = createClient();
const sub = pub.duplicate();
await Promise.all([pub.connect(), sub.connect()]);
io.adapter(createAdapter(pub, sub));
// io.to(channel).emit() now fans out across all instances
Alternatively, use NATS or Kafka if your infrastructure already includes them. The skip condition applies only when the deployment is permanently single-process — document that constraint explicitly if you take the skip.
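For teams already running NATS, the same bridge can be sketched directly against the `nats` npm client. This is a minimal sketch, not a drop-in adapter: the subject name `chat.messages`, the server address, and the `bridgeOverNats` helper are illustrative, and unlike the Redis adapter above it bypasses Socket.IO's adapter API entirely.

```javascript
// Subject name is illustrative, not prescribed by NATS.
const SUBJECT = 'chat.messages';

// Wire-format helpers shared by publisher and subscriber.
const encodeMessage = (msg) => new TextEncoder().encode(JSON.stringify(msg));
const decodeMessage = (data) => JSON.parse(new TextDecoder().decode(data));

// Every instance runs this once at startup: outgoing messages are
// published to NATS, and whatever arrives on the subject is re-emitted
// locally, so delivery no longer depends on which instance a client's
// WebSocket landed on.
async function bridgeOverNats(io) {
  const { connect } = await import('nats'); // npm `nats` client
  const nc = await connect({ servers: 'localhost:4222' });

  // Inbound: messages published by any instance, including this one.
  const sub = nc.subscribe(SUBJECT);
  (async () => {
    for await (const m of sub) {
      const msg = decodeMessage(m.data);
      io.to(msg.channel).emit('message', msg);
    }
  })();

  // Outbound: publish instead of emitting directly.
  io.on('connection', (socket) => {
    socket.on('send_message', (data) => {
      nc.publish(SUBJECT, encodeMessage({
        id: crypto.randomUUID(),
        channel: data.channel,
        content: data.content,
        userId: socket.userId,
      }));
    });
  });
}
```

Note that the publishing instance also receives its own message back through the subscription, which is what delivers it to local clients; there is deliberately no separate local emit.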
ID: community-realtime.message-delivery.pubsub-cross-instance
Severity: high
What to look for: Enumerate all message broadcast paths. For each, classify whether it uses a shared pub/sub system or only in-process memory. Count the pub/sub integrations: Redis PUBLISH/SUBSCRIBE, NATS, Kafka, or equivalent.
Pass criteria: Messages are published to at least one shared pub/sub system (Redis, NATS, or Kafka) that delivers them to all connected server instances, not only to in-process connections.
Fail criteria: Messages are broadcast only in-process, or pub/sub is not used for a multi-instance setup.
Skip (N/A) when: The deployment is single-instance and never scales to multiple processes.
Cross-reference: For horizontal scaling patterns and infrastructure design, the SaaS Infrastructure Audit covers multi-instance architecture.
Detail on fail: "Messages broadcast only to in-process connections. In a multi-instance deployment, users on different servers would miss messages."
Remediation: Use a pub/sub system to coordinate real-time state across instances:
import { createClient } from 'redis';
import { nanoid } from 'nanoid';
const publisher = createClient();
const subscriber = publisher.duplicate();
await Promise.all([publisher.connect(), subscriber.connect()]);
// Subscribe once at startup. Every instance, including the one that
// published, receives each message here and emits it to its own local
// connections, so no separate local emit is needed (emitting locally
// as well would deliver the message twice on the publishing instance).
await subscriber.subscribe('all-channels', (raw) => {
  const msg = JSON.parse(raw);
  io.to(msg.channel).emit('message', msg);
});
socket.on('send_message', async (data) => {
  const message = { id: nanoid(), channel: data.channel, content: data.content, userId: socket.userId };
  // Publish to the shared channel so all instances (this one included) see it
  await publisher.publish('all-channels', JSON.stringify(message));
});