All 28 checks with why-it-matters prose, severity, and cross-references to related audits.
Tables without explicit primary keys are unpredictable to query, replicate, and join. Without a declared PK, Postgres cannot enforce entity uniqueness at the database level, ORMs like Prisma refuse to generate correct update and delete operations, and replication tools (Supabase logical replication, AWS DMS) silently skip or mis-replicate affected rows. Junction tables without composite PKs allow duplicate relationship rows that violate relational integrity. The ISO 25010 functional-correctness requirement is unambiguous: every entity must have a stable, unique identifier declared in the schema — not inferred from an implicit rowid that disappears the moment you add a real column.
Why this severity: Critical because missing PKs break ORM operations, block correct replication, and allow duplicate rows in junction tables — all three failures can corrupt application state silently.
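A minimal sketch of the remediation, written as a migration string in the node-postgres style. The `posts` and `post_tags` table names are illustrative: one surrogate key, and a composite key on the junction table so duplicate relationship rows are rejected at the database level.

```typescript
// Hypothetical migration SQL: every table declares a PK, and the junction
// table uses a composite PK so duplicate (post_id, tag_id) rows are rejected.
const migration = `
  CREATE TABLE posts (
    id BIGINT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    title TEXT NOT NULL
  );

  CREATE TABLE post_tags (
    post_id BIGINT NOT NULL REFERENCES posts (id),
    tag_id  BIGINT NOT NULL,
    PRIMARY KEY (post_id, tag_id)
  );
`;
```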
database-design-operations.schema-design.primary-keys-defined

Without foreign key constraints, the database allows orphan rows — comments that reference deleted posts, orders that reference nonexistent users, line items whose product was removed. Application code must then paper over these inconsistencies with defensive null checks, producing unpredictable behavior in edge cases. Missing `ON DELETE` clauses fall back to the database default (`NO ACTION`), which rejects parent deletes with an error instead of expressing the intended `CASCADE` or `SET NULL` behavior. ISO 25010 functional-correctness requires that referential constraints are enforced at the storage layer, not assumed from application logic that can be bypassed by direct SQL access, background jobs, or migration scripts.
Why this severity: Critical because orphan rows corrupt business-critical data relationships and can only be detected after the fact, often requiring expensive reconciliation queries.
database-design-operations.schema-design.foreign-keys-present

An unindexed email column means every login attempt performs a full table scan. An unindexed FK column means every JOIN on that relationship scans the entire child table. At 10,000 rows this is slow; at 100,000 rows it causes timeouts. Postgres does not automatically index FK columns — unlike MySQL — so every `*_id` column added by an ORM is unindexed until explicitly declared. ISO 25010 time-behaviour directly captures this: query latency degrades predictably as data volume grows, and the degradation is sharp rather than gradual when no index exists. Missed indexes on `status` or `created_at` columns produce full scans on the most common listing and filtering queries in the application.
Why this severity: High because missing indexes cause full table scans on every query of the affected column, producing latency that grows linearly with row count and degrades user experience at scale.
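A hedged sketch of the fix, again as a migration string with illustrative table names: explicit indexes on FK columns and common filter columns, since Postgres creates indexes automatically only for `PRIMARY KEY` and `UNIQUE` constraints.

```typescript
// Hypothetical migration SQL: FK columns, listing filters, and login lookups
// each get an explicit index.
const addIndexes = `
  CREATE INDEX idx_comments_post_id ON comments (post_id);               -- FK used in JOINs
  CREATE INDEX idx_orders_status_created ON orders (status, created_at); -- listing queries
  CREATE UNIQUE INDEX idx_users_email ON users (lower(email));           -- login lookups
`;
```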
database-design-operations.schema-design.indexes-on-queries

Nullable required fields shift data integrity guarantees from the database to application code. When `users.email` or `orders.status` allows NULL, any code path — a background job, a direct SQL migration, a third-party integration — can insert invalid rows that the application is not equipped to handle. The downstream effect is defensive `?.` and null-coalescing operators scattered throughout the codebase, and sporadic runtime errors when those checks are missed. ISO 25010 functional-correctness treats this as a schema design failure: fields that have no valid null state should have that stated in the schema, not enforced by convention.
Why this severity: High because nullable required fields permit invalid data to enter the database from any write path, including migration scripts and background jobs that bypass application validation.
database-design-operations.schema-design.not-null-constraints

Monetary values stored as `FLOAT` lose cents through floating-point rounding — a `$29.99` price becomes `$29.989999...` in arithmetic, producing billing discrepancies. Timestamps stored as `VARCHAR` cannot be range-queried or timezone-converted by the database. Enums stored as unbounded `TEXT` accept any string value, so typos like `'actve'` or `'CANCELED'` silently enter the database and bypass all application-level validation. These are not hypothetical: AI-generated schemas routinely default to `TEXT` for everything and `FLOAT` for any numeric value. ISO 25010 functional-correctness penalizes these because the database cannot enforce domain constraints it was not given.
Why this severity: Medium because type mismatches cause data corruption and silent validation failures, but are typically contained to specific columns rather than system-wide integrity failures.
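The rounding failure is reproducible in a few lines of plain arithmetic, and so is the usual fix: store money as integer cents (or Postgres `NUMERIC`) and convert only at the display boundary.

```typescript
// Binary floating point cannot represent most decimal fractions exactly,
// so billing math drifts:
const floatSum = 0.1 + 0.2; // 0.30000000000000004, not 0.3

// The common fix: integer cents in the database (INTEGER/BIGINT, or
// NUMERIC(12,2)), converted to a display string only at the edge.
const priceCents = 2999;           // $29.99
const totalCents = priceCents * 3; // 8997, exact integer arithmetic
const display = `$${(totalCents / 100).toFixed(2)}`; // "$89.97"
```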
database-design-operations.schema-design.appropriate-types

Inconsistent naming conventions are a signature of AI-generated schemas assembled across multiple sessions without coherence review. A schema mixing `userId`, `post_id`, `CreatedAt`, and `updated_at` forces developers to look up column names for every query, breaks ORM mappings that assume a consistent convention, and produces mapping errors when tools like Prisma generate TypeScript types. ISO 25010 maintainability-readability captures the downstream cost: every new developer must reverse-engineer the implicit rules before they can write correct queries. Naming inconsistencies also cause real bugs when case-sensitive column lookups fail in production queries.
Why this severity: Low because naming inconsistencies degrade developer productivity and maintainability without directly causing runtime failures or data loss in most ORM setups.
database-design-operations.schema-design.naming-conventions

N+1 queries are the most common performance failure in AI-generated API code. A posts listing endpoint that fetches 50 posts and then runs 50 individual user lookups executes 51 database round-trips instead of 1. At low traffic this is invisible; at 100 concurrent users, each triggering a 51-query path, the connection pool saturates and requests queue. The pattern emerges because AI coding tools write `posts.map(p => getUser(p.userId))` idiomatically in JavaScript without recognizing the database implications. ISO 25010 time-behaviour captures the direct impact: response latency scales linearly with list size rather than being constant.
Why this severity: Critical because N+1 patterns multiply database round-trips with list size, saturating connection pools under moderate load and causing cascading timeouts.
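The batched alternative can be sketched as follows. `fetchUsersByIds` is an assumed helper standing in for one query such as `WHERE id = ANY($1)`; the point is that the list endpoint makes one round-trip regardless of list size.

```typescript
type Post = { id: number; userId: number };
type User = { id: number; name: string };

// Deduplicate the author IDs so the batch query asks for each user once.
function uniqueUserIds(posts: Post[]): number[] {
  return [...new Set(posts.map((p) => p.userId))];
}

// One batched lookup instead of one query per post.
async function attachAuthors(
  posts: Post[],
  fetchUsersByIds: (ids: number[]) => Promise<User[]>, // assumed helper
) {
  const users = await fetchUsersByIds(uniqueUserIds(posts)); // 1 query, not N
  const byId = new Map(users.map((u) => [u.id, u]));
  return posts.map((p) => ({ ...p, author: byId.get(p.userId) }));
}
```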
database-design-operations.query-patterns.no-n-plus-one

SQL injection via string interpolation is OWASP A03 and CWE-89, and it remains the most exploited database vulnerability in production applications. A single unparameterized query — `` `SELECT * FROM users WHERE email = '${email}'` `` — lets an attacker bypass authentication, extract all rows, or drop tables by submitting crafted input. AI-generated code frequently uses template literals for readability without recognizing the injection risk. Even one unparameterized query with user input is a critical vulnerability regardless of how many other queries are safe, because an attacker will find and exploit that one path.
Why this severity: Critical because a single unparameterized query allows full database compromise through SQL injection, with no remediation possible short of fixing the code and rotating credentials.
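A minimal sketch of the safe shape, in the node-postgres style: the SQL text is a constant containing only placeholders, and user input travels in the values array, which the driver sends to the server separately from the query text.

```typescript
// User input never appears in the SQL string; it rides in `values`,
// where the driver transmits it as data, not as SQL.
function findUserByEmail(email: string) {
  return {
    text: "SELECT id, email FROM users WHERE email = $1",
    values: [email],
  };
}
```

Passed to something like `pool.query(q.text, q.values)`, crafted input such as `' OR '1'='1` arrives at the server as an inert string, not as executable SQL.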
database-design-operations.query-patterns.parameterized-queries

Creating a new database connection per request exhausts the Postgres connection limit in minutes under load. Postgres allows roughly 100 connections by default (adjustable but resource-constrained); a serverless function that opens a new `PrismaClient()` or `new Pool()` per invocation can exhaust this from a single spike of concurrent requests, producing `too many connections` errors for all users. CWE-400 (uncontrolled resource consumption) applies directly: connection handles are a finite resource, and unbounded creation degrades the service for all tenants. ISO 25010 resource-utilisation captures the connection efficiency requirement.
Why this severity: High because new-connection-per-request exhausts the Postgres connection limit rapidly under moderate concurrency, producing hard failures for all users rather than degraded performance.
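The usual remedy is a module-level singleton: the pool (or `PrismaClient`) is created once per process and shared by every request. A sketch, with `create` standing in for `new Pool()` or `new PrismaClient()`:

```typescript
// Lazily create the resource on first use, then return the same instance
// for every subsequent call.
function once<T>(create: () => T): () => T {
  let instance: T | undefined;
  return () => (instance ??= create());
}

let poolsCreated = 0;
const getPool = once(() => {
  poolsCreated += 1; // in real code: new Pool({ max: 10 })
  return { max: 10 };
});
```

Every request handler calls `getPool()` and shares the one pool; the process opens at most `max` connections no matter how many requests arrive.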
database-design-operations.query-patterns.connection-pooling

Multi-step mutations without transactions are a silent data corruption risk. When an order creation writes the order row but then fails before writing the order items, the application has a phantom order with no line items — a business record that is inconsistent at the database level and requires manual cleanup. This is an atomicity failure, compounded under concurrent writes by race conditions (CWE-362), and a functional-correctness failure per ISO 25010: either all writes succeed atomically or none do. AI-generated code frequently sequences multiple `prisma.create()` calls without recognizing that network or database failures between them leave partial state.
Why this severity: High because non-atomic multi-step writes produce partially committed data on failure, creating records that are inconsistent and cannot be automatically reconciled without manual intervention.
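An all-or-nothing wrapper can be sketched over a minimal client interface; `query` stands in for a driver call (Prisma users would reach for `prisma.$transaction` instead). On any thrown error the wrapper rolls back, so a failed order-items insert never leaves a phantom order behind.

```typescript
interface Client {
  query(sql: string): Promise<void>;
}

// Run `fn` inside BEGIN/COMMIT; roll back on any failure and rethrow.
async function withTransaction<T>(client: Client, fn: () => Promise<T>): Promise<T> {
  await client.query("BEGIN");
  try {
    const result = await fn();
    await client.query("COMMIT");
    return result;
  } catch (err) {
    await client.query("ROLLBACK");
    throw err;
  }
}
```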
database-design-operations.query-patterns.transactions-used

Returning all columns from a user table — including `password_hash`, `stripe_customer_id`, and internal flags — in a general API response is an OWASP A02 data exposure pattern (CWE-200). Even when the fields are not sent to the client by the application, they consume network bandwidth and memory, and any serialization bug (an accidentally returned field, a `JSON.stringify` on the whole object) exposes sensitive internal data. For list views, fetching large `content` columns for 50 posts when only the title and summary are displayed is a performance problem at scale.
Why this severity: Low because the exposure requires an additional serialization mistake to reach clients, but the data is in memory and on the wire in a form that amplifies the impact of any adjacent bug.
database-design-operations.query-patterns.no-select-star

Without a migration tool, schema changes are applied by hand through a database GUI or cloud console — changes that are invisible to version control, not reproducible across environments, and impossible to roll back systematically. When the production schema diverges from the development schema, bugs surface only in production. ISO 25010 recoverability requires that the system can be restored to a known state; without a migration history, the schema cannot be reliably reconstructed after a failure. Prisma's `db push` shortcut is specifically not a migration tool — it applies schema changes without creating a history file, meaning it cannot be used to reproduce or audit schema evolution.
Why this severity: High because schema changes without version control cannot be audited, replicated across environments, or rolled back — any schema divergence between environments causes hard-to-diagnose bugs.
database-design-operations.migration-versioning.migration-tool-present

A migration that drops a column or changes a type without a preceding expand-contract sequence destroys data the moment it is applied to production. Even if the application code no longer reads the column, the data may be needed for compliance, analytics, or future recovery. ISO 25010 recoverability requires the system to be restorable after failures; a migration that irreversibly removes data violates this by eliminating the recovery option. Adding NOT NULL to a column that has existing NULL values causes the migration to fail mid-run, leaving the schema in a partially applied state on some databases.
Why this severity: High because irreversible migrations — drops, type narrowing, NOT NULL without backfill — cause immediate data loss or migration failures that require manual database surgery to resolve.
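The expand-contract sequence can be sketched as two migration strings (column and table names are illustrative, here a hypothetical `full_name` to `display_name` rename): each step ships separately, and the destructive step runs only after every reader has moved off the old column.

```typescript
// Step 1-2: expand and backfill; both are additive and reversible.
const expand = `
  ALTER TABLE users ADD COLUMN display_name TEXT;
  UPDATE users SET display_name = full_name WHERE display_name IS NULL;
`;

// Step 3 happens in application code: deploy readers/writers that use
// display_name only.

// Step 4: contract, shipped last and only after step 3 is fully rolled out.
const contract = `
  ALTER TABLE users DROP COLUMN full_name;
`;
```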
database-design-operations.migration-versioning.non-destructive-migrations

Migrations without down functions (or a documented corrective-migration approach for Prisma) mean that a bad migration applied to production has no automated recovery path. The team must reconstruct the revert SQL under pressure during an incident, while the production database is in a broken state and users are affected. ISO 25010 recoverability requires a defined process for restoring normal operation; a migration system where every rollback is improvised from scratch does not meet this bar.
Why this severity: Medium because rollback gaps surface only during incidents, but when they do, the absence of a down migration turns a recoverable schema mistake into an extended outage.
database-design-operations.migration-versioning.rollback-capability

Migration files in `.gitignore` mean the team's schema history exists only on individual developers' machines and the production database. When a new developer clones the repo, they cannot reproduce the database schema. When the production database is lost, the schema cannot be reconstructed. ISO 25010 recoverability requires that schema history is independently reproducible; migration files in version control are the mechanism. Prisma's `migration_lock.toml` is especially critical — if gitignored, team members may apply migrations in inconsistent order and produce divergent schemas.
Why this severity: Medium because gitignored migration files make schema history irreproducible across team members and environments, causing divergence that produces silent, environment-specific bugs.
database-design-operations.migration-versioning.migrations-in-vcs

Without a seed script, every `prisma migrate reset` during development requires manual recreation of test accounts, sample data, and edge-case records. New developers spend hours setting up a usable dev environment instead of writing code. More critically, automated tests that depend on database state cannot run reproducibly in CI without seed data, leading to test infrastructure that is too fragile to run in automated pipelines. ISO 25010 testability requires that the system can be placed into a known state for testing; a seed script is the minimal mechanism for achieving this.
Why this severity: Low because missing seed data degrades developer productivity and test reliability without causing runtime failures in production.
database-design-operations.migration-versioning.seed-data-available

A hardcoded database connection string or password in source code is exposed to every developer with repo access, in every git clone, in CI logs, and in any code review tool. Once committed to git history, it is permanently visible even after removal — any attacker who ever had read access to the repo has the credential. CWE-798 (use of hard-coded credentials) and OWASP A07 both recognize this as a critical failure because the attacker does not need to compromise the server — they only need to access the repository. Credential rotation after exposure is the only remediation, and it requires a production deployment under time pressure.
Why this severity: Critical because a hardcoded credential in source code is permanently accessible to anyone with repository read access, including past collaborators and any platform that integrated with the repo.
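A minimal sketch of the safe shape: the credential lives in the environment (or a secret manager), and the process fails fast at startup when it is missing rather than falling back to a hardcoded default.

```typescript
// Read a required secret from the environment; refuse to start without it.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) throw new Error(`missing required environment variable ${name}`);
  return value;
}

// At startup: const databaseUrl = requireEnv("DATABASE_URL");
```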
database-design-operations.security-access.no-hardcoded-credentials

Multi-tenant tables without Row-Level Security allow any authenticated user to read or modify other users' data by supplying a different ID in the request. In Supabase applications, tables without RLS policies are readable and writable by any authenticated session — there is no application-layer fallback. This is OWASP A01 (Broken Access Control) and CWE-284, and it is an IDOR (Insecure Direct Object Reference) vulnerability: an attacker iterates IDs and retrieves other users' posts, files, or messages. Application-level filtering applied inconsistently across routes has the same result — every unguarded route is a data breach vector.
Why this severity: High because missing row-level isolation allows any authenticated user to access or modify other users' data, constituting an IDOR vulnerability with direct GDPR and data breach implications.
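A hypothetical Supabase-style policy, again as a migration string: RLS is enabled on the table and a single policy restricts both reads and writes to the row owner via Supabase's `auth.uid()` helper. Table and column names are illustrative.

```typescript
// Enable RLS, then allow each authenticated user to see and modify only
// their own rows. USING gates reads/updates/deletes; WITH CHECK gates
// inserted or updated row values.
const rlsMigration = `
  ALTER TABLE posts ENABLE ROW LEVEL SECURITY;

  CREATE POLICY "owners_only" ON posts
    FOR ALL
    USING (auth.uid() = user_id)
    WITH CHECK (auth.uid() = user_id);
`;
```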
database-design-operations.security-access.row-level-security

An application connecting to Postgres as the `postgres` superuser has unrestricted DDL access — it can drop tables, create new databases, and read any schema. If the application is compromised through a SQL injection vulnerability or a dependency supply-chain attack, the attacker has full database control. CWE-272 (least privilege violation) and OWASP A01 both require that each system component operates with the minimum permissions necessary. Using `service_role` in Supabase client-side code is the same failure pattern: RLS is bypassed entirely, and any client-side exploit has admin-level database access.
Why this severity: Medium because a superuser application connection amplifies the blast radius of any application-layer compromise to full database control, but exploitation still requires a secondary vulnerability.
database-design-operations.security-access.least-privilege

Platform-level encryption at rest protects the physical disk from theft but does not protect sensitive columns from a compromised database user or a SQL injection attack that reads data directly from tables. SSNs, government IDs, health diagnoses, and biometrics stored as plaintext TEXT columns are fully readable to any session with SELECT access. CWE-311 (missing encryption of sensitive data) and OWASP A02 both require that sensitive fields are protected at the application layer — the database encryption key and the application encryption key must be separately managed so that compromising one does not expose the other.
Why this severity: Medium because field-level encryption failure exposes the most sensitive personal data to any database-level attacker, but exploitation requires database access beyond typical application-level vulnerabilities.
database-design-operations.security-access.sensitive-data-encryption

A database connection without SSL transmits queries, parameters, and result sets as cleartext on the network. Any host on the path between the application server and the database — a shared VPC, a cloud provider's internal network, a compromised router — can read and modify query traffic. This is CWE-319 (cleartext transmission of sensitive information) and OWASP A02. The failure mode is especially dangerous because the exposed data includes authentication credentials passed as query parameters and result sets containing all user PII stored in the database. `rejectUnauthorized: false` is equally dangerous: it encrypts the channel but does not validate the server certificate, leaving the connection open to man-in-the-middle substitution.
Why this severity: High because cleartext database connections expose all query traffic — including credentials and PII — to any network observer between the application and database hosts.
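A sketch of a node-postgres style configuration with TLS on and the server certificate actually verified; the CA file path is a placeholder for your provider's bundle. The key point is that `rejectUnauthorized` stays `true`: flipping it to `false` encrypts the channel but accepts any certificate, reopening the man-in-the-middle window.

```typescript
// Hypothetical pool configuration: TLS required, certificate verified.
const poolConfig = {
  connectionString: process.env.DATABASE_URL,
  ssl: {
    rejectUnauthorized: true, // never set to false in production
    // ca: readFileSync("./provider-ca.pem", "utf8"), // provider CA bundle
  },
};
```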
database-design-operations.security-access.connection-ssl

The most common backup failure mode is not a missing backup — it is discovering that backups exist but no one knows where they are, how long they are retained, or how to use them during an incident at 2 AM. ISO 25010 recoverability requires a defined, documented, and actionable recovery procedure, not just an assumption that the cloud provider is doing something. Managed databases like Supabase Free include only basic backups; teams that assume PITR is included on the free tier discover the gap after the incident. A documented backup strategy converts an implicit assumption into an auditable commitment.
Why this severity: High because undocumented backup strategies produce unknown recovery capability — the team cannot state their actual RPO or RTO, which is equivalent to having no recovery plan.
database-design-operations.backup-recovery.backup-strategy-documented

Daily snapshot backups have a recovery point objective (RPO) of up to 24 hours — a catastrophic failure late in the day can erase a full day of transactions, orders, or user data. Point-in-time recovery compresses that window to minutes. ISO 25010 recoverability requires that the system can be restored to a recent-enough state to be operationally viable; for applications handling financial transactions, bookings, or communications, 24-hour data loss is not viable. Supabase Free tier provides only basic daily backups — many teams on free tier assume PITR is included until they need it.
Why this severity: Medium because PITR absence means up to 24 hours of data loss in a disaster, which is tolerable only for applications with no financial or time-sensitive records.
database-design-operations.backup-recovery.point-in-time-recovery

An untested backup is not a backup — it is an assumption. Backup files can be incomplete, corrupt, or require a restore procedure the team has never practiced. ISO 25010 recoverability requires not just that backups exist, but that the restoration process works and can be executed under pressure. A team that discovers their restore procedure is broken during an active outage, while users are affected and stakeholders are demanding updates, has materially worse outcomes than a team that discovered and fixed the same gap during a scheduled test.
Why this severity: Medium because an untested restore procedure means the team has unknown recovery capability — the backup may be unusable when needed most, during an active incident.
database-design-operations.backup-recovery.restore-tested

A database password that has never been rotated since project creation has been exposed to every team member who ever had environment access, every CI system that logged the connection string, and any third-party service that consumed it. CWE-255 captures long-lived credentials as a vulnerability class: the longer a credential is in use without rotation, the larger the set of people and systems that have seen it. A defined rotation schedule limits the exposure window for any past compromise that has not yet been detected.
Why this severity: Low because credential rotation reduces long-term exposure risk but does not directly prevent active exploitation — its value is in limiting the damage of undetected past exposure.
database-design-operations.backup-recovery.credential-rotation

A health check that returns `{ ok: true }` without testing database connectivity provides false assurance. When the database is unreachable, load balancers and deployment platforms interpret the 200 response as healthy and keep routing traffic to an application that cannot serve requests. ISO 25010 availability requires that the system's operational status is accurately observable; a health endpoint that does not reflect the actual state of its dependencies does not meet this requirement. Deployments that wait for a health check to pass before routing traffic also rely on this — a DB-unaware health check will green-light a deployment on a broken database.
Why this severity: Medium because a DB-unaware health endpoint causes deployment platforms to route traffic to broken instances and prevents automated recovery from detecting database failures.
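A dependency-aware handler can be sketched as follows; `pingDb` is an assumed stand-in for a cheap round-trip such as `SELECT 1`. The handler reports 503 when the database is unreachable, so load balancers and deploy gates stop routing to the broken instance instead of trusting a blanket `{ ok: true }`.

```typescript
// Health endpoint that reflects the actual state of its database dependency.
async function healthCheck(pingDb: () => Promise<void>) {
  try {
    await pingDb(); // e.g. pool.query("SELECT 1") in real code
    return { status: 200, body: { ok: true, db: "up" } };
  } catch {
    return { status: 503, body: { ok: false, db: "down" } };
  }
}
```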
database-design-operations.monitoring-ops.health-check-endpoint

Without slow query logging, a query that degrades from 50ms to 5000ms as the database grows is invisible until users file support tickets or error rates spike. By then, the query has been running slowly for hours or days. ISO 25010 time-behaviour requires that performance characteristics are observable and measurable; a database layer with no query timing instrumentation provides no signal for performance regressions introduced by new code, schema changes, or data volume growth.
Why this severity: Low because slow query visibility affects detection speed rather than causing direct failures — performance degradation goes unnoticed longer, but the system continues to function.
database-design-operations.monitoring-ops.slow-query-monitoring

Connection pool exhaustion produces silent request failures — requests queue behind a saturated pool, timeout, and return errors — with no visibility into why. CWE-400 (uncontrolled resource consumption) applies when pool limits are reached without detection. Without pool metrics, the team discovers saturation only when users report errors, not proactively when the pool approaches its limit. ISO 25010 resource-utilisation requires that resource consumption is observable; connection pool health is a critical operational signal for scaling decisions and for diagnosing connection leaks introduced by code changes.
Why this severity: Info because pool monitoring improves operational visibility and incident detection speed but does not prevent failures — unmonitored pools still function until they are exhausted.
database-design-operations.monitoring-ops.connection-monitoring

Run this audit in your AI coding tool (Claude Code, Cursor, Bolt, etc.) and submit results here for scoring and benchmarks.
Open Database Design & Operations Audit