A queue worker with no memory limit can consume all available host memory during a burst, taking down the database, Redis, and web server on the same host. CWE-400 (uncontrolled resource consumption) applies directly. The connection pool problem is multiplicative: 5 replicas, each with pg's default pool of 10 connections, open 50 connections in total — fine for one replica, catastrophic for five against a database with max_connections set to 25. ISO 25010 performance-efficiency.resource-utilization requires that resource consumption be bounded and predictable.
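The multiplicative failure above is plain arithmetic; a minimal sketch (the function name is illustrative, not from any library):

```javascript
// Total connection demand is replicas × per-instance pool size;
// it must stay below the database's max_connections.
function totalConnections(replicas, poolPerInstance) {
  return replicas * poolPerInstance;
}

// One replica with pg's default pool of 10: fine against max_connections = 25.
console.log(totalConnections(1, 10)); // 10
// Five replicas with the same default: double the 25-connection ceiling.
console.log(totalConnections(5, 10)); // 50
```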
Low because resource limits protect co-located services from worker memory exhaustion and connection pool saturation, but the failure mode is gradual rather than instantaneous.
Add memory limits at the container level and set explicit connection pool sizes. In docker-compose.yml:
services:
  email-worker:
    mem_limit: 512m
    environment:
      - DATABASE_POOL_SIZE=5  # per-instance cap; total = 5 × replica count
And in the worker startup command:
node --max-old-space-size=400 dist/worker.js
Calculate pool size as: floor(max_connections / expected_replicas). At least 2 of 3 resource limits — memory, DB connection pool, Redis connection pool — must be explicitly configured.
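The sizing rule above can be sketched directly (function name is illustrative):

```javascript
// floor(max_connections / expected_replicas): with this per-instance cap,
// the fleet as a whole can never exceed the database's connection ceiling.
function poolSizePerInstance(maxConnections, expectedReplicas) {
  return Math.floor(maxConnections / expectedReplicas);
}

// 25 max_connections shared by 5 replicas -> 5 connections each.
console.log(poolSizePerInstance(25, 5)); // 5
```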
ID: operational-resilience-email.capacity-scaling.resource-limits-enforced
Severity: low
What to look for: Check for memory and connection pool limits on queue workers: container memory limits (Docker mem_limit or Kubernetes resources.limits.memory), Node.js --max-old-space-size flag, database connection pool size configuration, Redis connection pool limits. Unbounded workers can exhaust shared resources and take down adjacent services.
Pass criteria: Worker containers have a defined memory limit of no more than 2048 MB. Database and Redis connection pools have explicit max connection counts configured (not library defaults). Count all resource limit configurations present — at least 2 of 3 (memory, DB pool, Redis pool) must be explicitly set.
Fail criteria: No memory limits on worker containers. Connection pool uses default settings (which may be unbounded or very high). Or fewer than 2 of 3 resource limits are explicitly configured.
Skip (N/A) when: Workers run on managed serverless infrastructure (e.g., Lambda, Cloud Run) where resource isolation is enforced by the platform — confirmed by deployment configuration.
Detail on fail: "Worker container has no memory limit — a memory leak or burst would consume all host memory" or "Database connection pool uses pg default (10) but 20 worker replicas would create 200 concurrent connections — exceeds database limit"
Remediation: Add resource limits:
# docker-compose.yml
services:
  email-worker:
    mem_limit: 512m
    environment:
      - DATABASE_POOL_SIZE=5  # per-instance cap; total = 5 × replica count
And in the worker process:
node --max-old-space-size=400 dist/worker.js