An in-memory queue — a plain array, a Map, or a setInterval drain loop — lives entirely in the Node.js process heap. When the process restarts after a deploy, a crash, or an OOM kill, every pending email in that queue vanishes. Recipients who triggered a password reset or order confirmation during that window never receive it. CWE-400 covers resource exhaustion from unbounded growth; here the risk is the inverse — the store has zero durability. ISO/IEC 25010:2011 reliability.fault-tolerance requires components to survive fault conditions; in-process storage is the antithesis of that property.
Critical because a process restart silently drops all pending sends, and the application has no mechanism to detect or replay the lost jobs.
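For concreteness, a minimal sketch of the fragile pattern this check targets. The names and the restart simulation are illustrative, not taken from any real codebase:

```typescript
// Anti-pattern sketch: a module-level array used as the email "queue".
// Everything here lives in the process heap; nothing survives a restart.
type PendingEmail = { to: string; templateId: string }

const pending: PendingEmail[] = [] // the "queue" — just heap memory

function enqueue(email: PendingEmail): void {
  pending.push(email)
}

// Simulate what a process restart does to this structure: the module is
// re-evaluated and the array is recreated empty — all jobs are gone,
// with no record that they ever existed.
function simulateRestart(): PendingEmail[] {
  return [] // fresh heap, zero pending jobs
}

enqueue({ to: 'user@example.com', templateId: 'password-reset' })
console.log(pending.length)           // 1 job pending before the "restart"
console.log(simulateRestart().length) // 0 — the job is silently lost
```

No error is thrown and no log is written when the jobs disappear, which is why the fail criteria below treat any heap-only structure as critical rather than merely risky.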
Replace any in-process queue with BullMQ (backed by Redis) or pg-boss (backed by PostgreSQL) in lib/queue.ts. Initialize the connection at startup with an explicit URL from environment variables:
// lib/queue.ts
import { Queue } from 'bullmq'
import IORedis from 'ioredis'
if (!process.env.REDIS_URL) throw new Error('REDIS_URL is not set')
const connection = new IORedis(process.env.REDIS_URL, {
  maxRetriesPerRequest: null // required by BullMQ
})
export const emailQueue = new Queue('email', { connection })
For serverless deployments where Redis cost is prohibitive, use pg-boss against your existing DATABASE_URL.
ID: sending-pipeline-infrastructure.queue-architecture.persistent-queue
Severity: critical
What to look for: Enumerate all queue implementations in the codebase. Count the number of in-memory data structures (plain arrays, in-process event emitters, setImmediate loops) used as the primary queue versus durable stores (Redis, PostgreSQL, RabbitMQ, SQS, Cloud Tasks). A durable queue is backed by a persistent external store. Check that the queue configuration connects to an external backing store with an explicit connection URL.
Pass criteria: The queue is backed by at least one persistent external store (Redis, PostgreSQL, RabbitMQ, SQS, or equivalent). Jobs survive a process restart. The backing store connection is initialized at application startup with an explicit connection URL or configuration. Before evaluating, quote the exact queue initialization code and the connection configuration. Do NOT pass when the queue library is installed but no connection to an external store is configured.
Fail criteria: Emails are queued using in-memory arrays, Map objects, plain event emitters, or any structure that lives only in the Node.js process heap. A restart drops all pending sends.
Skip (N/A) when: The project sends email synchronously inline with the request (no queue) and the throughput is explicitly designed to be low-volume fire-and-forget — confirmed by the absence of queue libraries in package.json.
Detail on fail: Describe the in-memory pattern found. Example: "Emails pushed to a module-level array and drained by a setInterval loop — all pending sends lost on process restart" or "BullMQ initialized without a Redis connection — queue is non-functional and falls back to in-process processing"
Remediation: Use a queue library backed by a real store. BullMQ with Redis is the most common choice for Node.js:
// lib/queue.ts
import { Queue } from 'bullmq'
import IORedis from 'ioredis'
if (!process.env.REDIS_URL) throw new Error('REDIS_URL is not set')
const connection = new IORedis(process.env.REDIS_URL, {
  maxRetriesPerRequest: null // required by BullMQ
})
export const emailQueue = new Queue('email', { connection })
// Enqueue a send job
await emailQueue.add('send', {
  to: recipient.email,
  templateId: 'welcome',
  mergeFields: { firstName: recipient.firstName }
}, {
  attempts: 5,
  backoff: { type: 'exponential', delay: 2000 }
})
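The attempts/backoff options above translate into a concrete retry schedule. A small sketch to make the timing visible, assuming BullMQ's documented exponential formula of delay × 2^(retry − 1) — the helper function is illustrative, not part of BullMQ's API:

```typescript
// Retry delays implied by { attempts: 5, backoff: { type: 'exponential', delay: 2000 } },
// assuming BullMQ's documented formula: delay * 2^(retryCount - 1).
function exponentialDelays(attempts: number, baseDelayMs: number): number[] {
  const delays: number[] = []
  // `attempts` counts the first try, so there are attempts - 1 retries
  for (let retry = 1; retry < attempts; retry++) {
    delays.push(baseDelayMs * 2 ** (retry - 1))
  }
  return delays
}

console.log(exponentialDelays(5, 2000)) // [2000, 4000, 8000, 16000]
```

Under these assumptions a failing job is retried four times over roughly 30 seconds of cumulative backoff before landing in the failed set, where it remains inspectable in the durable store rather than vanishing.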
For serverless environments where Redis is too expensive, use pg-boss (PostgreSQL-backed):
import PgBoss from 'pg-boss'
if (!process.env.DATABASE_URL) throw new Error('DATABASE_URL is not set')
const boss = new PgBoss(process.env.DATABASE_URL)
await boss.start()
// pg-boss v10+ requires the queue to exist before sending:
await boss.createQueue('email:send')
await boss.send('email:send', { to, templateId, mergeFields })