Reputation monitoring that overwrites the current value on each update cannot answer the question every deliverability incident requires: was reputation already trending down before the campaign that triggered the alert, or did it spike suddenly? Trend analysis distinguishes a gradual erosion (a list quality problem) from an acute spike (a bad campaign or an infrastructure misconfiguration). Without historical rows, you cannot correlate reputation changes with specific sends, detect seasonal patterns, or demonstrate to an ESP's abuse team that a reputation problem is resolved. The operational blind spot shows up worst in post-incident review.
Low because single-row upsert storage does not cause immediate delivery failure, but it eliminates the ability to perform trend analysis or root-cause incident review without external tooling.
Add a time-series model alongside (not replacing) the current-value model. Use a composite unique key on (domain, date) so daily runs are idempotent:
// In prisma/schema.prisma
model DomainReputationHistory {
  id           String   @id @default(uuid())
  domain       String
  date         DateTime @db.Date
  spamRate     Float?
  inboxRate    Float?
  dkimPassRate Float?
  createdAt    DateTime @default(now())

  @@unique([domain, date])
  @@index([domain, date])
}
// Daily cron: upsert so reruns are safe
await db.domainReputationHistory.upsert({
  where: { domain_date: { domain, date: startOfDay(new Date()) } },
  update: { spamRate, inboxRate, dkimPassRate },
  create: { domain, date: startOfDay(new Date()), spamRate, inboxRate, dkimPassRate }
})
Query this table with a 30-day or 90-day window to render trend charts and detect sustained degradation. Keep the single-row DomainReputation model for current-state lookups that need low latency.
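To make "sustained degradation" concrete, here is a minimal sketch of a trend classifier over a window of daily spamRate rows (for example, the last 30 rows fetched from DomainReputationHistory, oldest first). The thresholds (a doubling for an acute spike, a 0.0005-per-day least-squares slope for erosion) are illustrative assumptions, not ESP guidance; tune them against your own baseline:

```typescript
// Hypothetical helper: classify a window of daily spam-rate samples.
// The 2x jump and 0.0005/day slope thresholds are illustrative only.
type Sample = { date: string; spamRate: number };

function classifyTrend(samples: Sample[]): "stable" | "gradual-erosion" | "acute-spike" {
  if (samples.length < 2) return "stable";
  const last = samples[samples.length - 1].spamRate;
  const prev = samples[samples.length - 2].spamRate;
  // Acute spike: the latest reading at least doubled versus the prior day.
  if (prev > 0 && last / prev >= 2) return "acute-spike";
  // Gradual erosion: least-squares slope over the window is persistently positive.
  const n = samples.length;
  const xMean = (n - 1) / 2;
  const yMean = samples.reduce((sum, s) => sum + s.spamRate, 0) / n;
  let num = 0;
  let den = 0;
  for (let i = 0; i < n; i++) {
    num += (i - xMean) * (samples[i].spamRate - yMean);
    den += (i - xMean) ** 2;
  }
  const slope = den === 0 ? 0 : num / den;
  return slope > 0.0005 ? "gradual-erosion" : "stable";
}
```

The spike check runs first because a single bad campaign also raises the window's slope; ordering the checks this way attributes the incident to the most recent change before falling back to the long-run trend.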
ID: deliverability-engineering.warmup-reputation.reputation-history
Severity: low
What to look for: Count all reputation data tables and classify each as "time-series" or "current-value-only." Check whether reputation metrics (complaint rates, bounce rates, inbox rates, domain/IP reputation scores) are stored over time rather than only as the current value. Look for database tables with a date or recordedAt field on reputation rows, or an append-only schema vs. a single-row upsert.
Pass criteria: At least 1 reputation metric table uses time-series storage (one row per day or per measurement, not just a single current-value row). Historical data allows trend analysis over weeks or months.
Fail criteria: Reputation data is stored as a single current-value record that is overwritten on each update, making trend analysis impossible without external tooling.
Skip (N/A) when: The project has no reputation monitoring integration at all (covered separately).
Detail on fail: "Reputation data upserted to a single row per domain — no history retained, trend detection requires external tooling" or "No reputation data stored at all"
Remediation: Use an append-only schema for reputation history:
model DomainReputationHistory {
  id           String   @id @default(uuid())
  domain       String
  date         DateTime @db.Date // One row per day
  spamRate     Float?
  inboxRate    Float?
  dkimPassRate Float?
  createdAt    DateTime @default(now())

  @@unique([domain, date]) // Idempotent daily writes
  @@index([domain, date])
}
// Upsert on each daily fetch so reruns are idempotent:
await db.domainReputationHistory.upsert({
  where: { domain_date: { domain, date: startOfDay(new Date()) } },
  update: { spamRate, inboxRate, dkimPassRate },
  create: { domain, date: startOfDay(new Date()), spamRate, inboxRate, dkimPassRate }
})