Individual bounce and complaint event rows tell you what happened; per-campaign and per-domain aggregate metrics tell you which campaigns and which domains are causing the problem. A campaign with a 5% bounce rate damages sender reputation far more than one at 0.5%, but without precomputed aggregates, identifying the high-bounce campaign requires an ad-hoc query that no one runs until the deliverability crisis is already underway. Per-domain aggregates let you answer the question: is the bounce rate rising consistently for a specific sending domain, indicating DNS authentication drift, or is it isolated to specific campaigns, indicating list-quality problems?
Severity is low because the raw events are still stored and queryable; the absence of precomputed aggregates means actionable signals surface only through manual analysis rather than automated alerting.
Compute and store bounce and complaint rates after each campaign completes, and run a daily cron for domain-level metrics. Upsert the results into a campaignMetrics table so the job is idempotent and safe to re-run:
```typescript
export async function computeCampaignBounceRate(
  campaignId: string
): Promise<{ bounceRate: number; complaintRate: number }> {
  // Count sends, bounces, and complaints for the campaign in parallel.
  const [sent, bounces, complaints] = await Promise.all([
    db.emailLog.count({ where: { campaignId } }),
    db.bounceEvent.count({ where: { campaignId } }),
    db.complaintEvent.count({ where: { campaignId } })
  ])
  // Guard against divide-by-zero for campaigns with no recorded sends.
  const bounceRate = sent > 0 ? bounces / sent : 0
  const complaintRate = sent > 0 ? complaints / sent : 0
  // Upsert so re-running the computation is idempotent.
  await db.campaignMetrics.upsert({
    where: { campaignId },
    update: { bounceRate, complaintRate, updatedAt: new Date() },
    create: { campaignId, bounceRate, complaintRate }
  })
  return { bounceRate, complaintRate }
}
```
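The function above covers per-campaign rates; the daily domain-level cron can reuse the same rate math. A minimal sketch, where `db` is injected for testability and the `domainMetrics` table, `domain` column, and `createdAt` timestamps are assumptions about the schema, not names from the original:

```typescript
// Pure rate math shared by campaign- and domain-level jobs;
// guards against divide-by-zero when nothing was sent.
export function computeRates(sent: number, bounces: number, complaints: number) {
  return {
    bounceRate: sent > 0 ? bounces / sent : 0,
    complaintRate: sent > 0 ? complaints / sent : 0
  }
}

// Daily cron body (sketch): trailing-30-day rates per sending domain.
// `db`, `domainMetrics`, and the column names are illustrative assumptions.
export async function computeDomainRates(db: any, domain: string) {
  const since = new Date(Date.now() - 30 * 24 * 60 * 60 * 1000)
  const [sent, bounces, complaints] = await Promise.all([
    db.emailLog.count({ where: { domain, createdAt: { gte: since } } }),
    db.bounceEvent.count({ where: { domain, createdAt: { gte: since } } }),
    db.complaintEvent.count({ where: { domain, createdAt: { gte: since } } })
  ])
  const rates = computeRates(sent, bounces, complaints)
  // Upsert keyed on domain so the daily run is idempotent.
  await db.domainMetrics.upsert({
    where: { domain },
    update: { ...rates, updatedAt: new Date() },
    create: { domain, ...rates }
  })
  return rates
}
```

Injecting `db` also makes the job unit-testable with a stub client, which is harder when the client is imported module-globally as in the per-campaign example.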
Alert when bounceRate > 0.02 (2% is a widely cited warning threshold) or complaintRate > 0.0008 (0.08%, safely below the roughly 0.1% spam-complaint level that mailbox providers treat as problematic). Surface both metrics in the campaign reporting UI so marketing can self-diagnose without waiting for engineering.
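The threshold check can live wherever the metrics are computed. A minimal sketch; the constant and function names are illustrative, not from the original:

```typescript
// Illustrative warning thresholds from the text: 2% bounce, 0.08% complaint.
const BOUNCE_RATE_WARN = 0.02
const COMPLAINT_RATE_WARN = 0.0008

// Returns human-readable alert strings; an empty array means healthy.
export function deliverabilityAlerts(metrics: {
  bounceRate: number
  complaintRate: number
}): string[] {
  const alerts: string[] = []
  if (metrics.bounceRate > BOUNCE_RATE_WARN) {
    alerts.push(
      `bounce rate ${(metrics.bounceRate * 100).toFixed(2)}% exceeds 2% warning threshold`
    )
  }
  if (metrics.complaintRate > COMPLAINT_RATE_WARN) {
    alerts.push(
      `complaint rate ${(metrics.complaintRate * 100).toFixed(3)}% exceeds 0.08% warning threshold`
    )
  }
  return alerts
}
```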
ID: deliverability-engineering.bounce-fbl.bounce-rate-monitoring
Severity: low
What to look for: Count all bounce/complaint rate aggregation queries or jobs. Check whether the system computes and stores bounce and complaint rates aggregated by campaign and by sending domain. Look for database queries or scheduled jobs that compute bounces / sent and complaints / sent and store those rates. Check if these metrics are queryable for a specific campaign to identify which campaigns are causing reputation damage.
Pass criteria: At least 2 aggregation dimensions exist: bounce rate per campaign and per sending domain. Report the count of metrics tracked even on pass. The system can answer "what was the bounce rate for campaign X" and "what is the 30-day average complaint rate for domain Y".
Fail criteria: Bounce and complaint events are stored individually but no aggregated rate metrics are computed or stored per campaign or per domain.
Skip (N/A) when: The project sends transactional email only with no campaign concept.
Detail on fail: "Bounce events stored individually but no per-campaign or per-domain rate metrics computed — identifying high-bounce campaigns requires manual aggregation queries" or "No campaign-level bounce tracking found"
Remediation: Compute and store bounce rates per campaign:
```typescript
export async function computeCampaignBounceRate(
  campaignId: string
): Promise<{ bounceRate: number; complaintRate: number }> {
  const [sent, bounces, complaints] = await Promise.all([
    db.emailLog.count({ where: { campaignId } }),
    db.bounceEvent.count({ where: { campaignId } }),
    db.complaintEvent.count({ where: { campaignId } })
  ])
  const bounceRate = sent > 0 ? bounces / sent : 0
  const complaintRate = sent > 0 ? complaints / sent : 0
  await db.campaignMetrics.upsert({
    where: { campaignId },
    update: { bounceRate, complaintRate, updatedAt: new Date() },
    create: { campaignId, bounceRate, complaintRate }
  })
  return { bounceRate, complaintRate }
}
```