A list endpoint with no LIMIT, no pagination, and no maximum-rows cap is a denial-of-service vector disguised as a feature. One user requests /api/users or /api/messages, the handler issues SELECT * FROM users with no LIMIT clause, the database returns a million rows, the serialized JSON response is two hundred megabytes, Postgres memory spikes, connection pools exhaust, and the application goes down for every other user until someone notices. Public engineering postmortems describe this exact shape over and over: high-cardinality list endpoints without pagination discipline are a recurring P0 outage pattern in production Postgres shops. AI coding tools scaffold SELECT * FROM table idioms that work perfectly on seed data with 10 rows and fail catastrophically at production scale, especially when the table grows linearly with user activity (messages, notifications, audit logs, session records). Any /api/* endpoint backed by an unbounded query is a latent outage waiting for one motivated or malicious user to notice.
The severity is medium: the failure is probabilistic (someone has to actually hit the endpoint against a populated table), but when it fires it takes down the entire app for every user, and any anonymous or authenticated attacker can trigger it deliberately.
Enforce a hard row cap on every list query: .limit(Math.min(userLimit ?? 50, 500)) in Supabase/Drizzle/Kysely, take: Math.min(userLimit ?? 50, 500) in Prisma, or LIMIT 500 in raw SQL. Implement cursor-based (keyset) pagination on (created_at, id) for stable ordering at scale. Reject user-supplied limit values above the cap rather than silently clamping them, so API consumers learn the constraint. See the api-security Pro audit for pagination contracts, cursor-token signing, and list-endpoint DoS budgets.
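The reject-above-cap behavior can be sketched as a small parser. This is a minimal sketch; parseLimit and the constant names are illustrative, not from any particular framework:

```typescript
const DEFAULT_LIMIT = 50;
const MAX_LIMIT = 500;

// Parse a user-supplied ?limit= value. Absent or empty -> default;
// non-integer or non-positive -> explicit error; above the cap ->
// explicit error, so API consumers learn the constraint instead of
// being silently clamped.
export function parseLimit(
  raw: string | null
): { ok: true; limit: number } | { ok: false; error: string } {
  if (raw === null || raw === '') return { ok: true, limit: DEFAULT_LIMIT };
  const n = Number(raw);
  if (!Number.isInteger(n) || n <= 0) {
    return { ok: false, error: 'limit must be a positive integer' };
  }
  if (n > MAX_LIMIT) {
    return { ok: false, error: `limit must be <= ${MAX_LIMIT}` };
  }
  return { ok: true, limit: n };
}
```

A route handler would map an ok: false result to a 400 response before any query runs, so an oversized ?limit= never reaches the database.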
Rule: project-snapshot.abuse.unbounded-list-queries
Severity: medium
Scope: app/api/**/route.ts, pages/api/**, server actions, and tRPC procedures. For each, confirm a VISIBLE bound in the same file as the query: LIMIT N (raw SQL), .limit(N) (Supabase/Drizzle/Kysely/Mongoose), .range(from, to) (Supabase), take: N (Prisma). If the handler accepts ?limit= / ?page= / ?per_page= / ?cursor=, the value must be server-clamped via Math.min(Number(limit) || DEFAULT, MAX) before reaching the query.
Pass: ?limit= is explicitly clamped via Math.min(userLimit, MAX). Default applied when ?limit= is absent.
Fail: No LIMIT/take/range/.limit() in the handler file. User-supplied ?limit= passed through unclamped (take: Number(searchParams.get('limit')) is a fail; ?limit=1000000 is honored). Reliance on a where: { tabId } / where: { userId } predicate for "natural bounding" with no explicit cap (a single tenant can still accrue tens of thousands of rows). Math.min(userLimit, 1_000_000) is not a meaningful clamp.
N/A: No list /api/* endpoints, only single-record routes like /api/users/[id]. Quote the directory walk.
Pass example: "/api/users: .limit(50) default; /api/messages: Math.min(userLimit, 200) + cursor-paginated; /api/orders: .range() with server-clamped window".
Fail output: "<path> issues <query> with no LIMIT and no clamp on user-supplied limit; table <table> can grow unbounded" (≤500 chars).
Related Pro audit: api-security.
Reference pattern (Supabase example):
const MAX_LIMIT = 500;
const DEFAULT_LIMIT = 50;
export async function GET(req: Request) {
const { searchParams } = new URL(req.url);
const limit = Math.min(Number(searchParams.get('limit')) || DEFAULT_LIMIT, MAX_LIMIT);
const cursor = searchParams.get('cursor');
// Apply the keyset filter before .order()/.limit(): supabase-js filter
// methods live on the filter builder, not on the transform builder.
let q = supabase.from('messages').select('*');
if (cursor) q = q.lt('created_at', cursor);
const { data } = await q.order('created_at', { ascending: false }).limit(limit);
const rows = data ?? [];
return Response.json({ data: rows, nextCursor: rows.length === limit ? rows[rows.length - 1].created_at : null });
}
Prisma equivalent: prisma.message.findMany({ take: Math.min(limit, 500), cursor: cursor ? { id: cursor } : undefined, skip: cursor ? 1 : 0, orderBy: { id: 'desc' } }). The orderBy is required for cursor pagination to be deterministic, and skip: 1 excludes the cursor row itself.
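The compound (created_at, id) keyset mentioned in the fix needs a cursor that carries both fields as one opaque token. A minimal sketch, assuming Node's Buffer with base64url support; encodeCursor/decodeCursor are illustrative names, and a production token would also be signed, as the Pro audit's cursor-token-signing item suggests:

```typescript
// Opaque cursor carrying the compound (created_at, id) keyset position.
// base64url keeps the token URL-safe in a ?cursor= query parameter.
type Cursor = { createdAt: string; id: string };

export function encodeCursor(c: Cursor): string {
  return Buffer.from(JSON.stringify(c)).toString('base64url');
}

// Returns null for any malformed or tampered token rather than throwing,
// so the handler can fall back to the first page.
export function decodeCursor(token: string): Cursor | null {
  try {
    const parsed = JSON.parse(Buffer.from(token, 'base64url').toString('utf8'));
    if (typeof parsed.createdAt === 'string' && typeof parsed.id === 'string') {
      return { createdAt: parsed.createdAt, id: parsed.id };
    }
    return null;
  } catch {
    return null;
  }
}
```

The decoded pair feeds a row comparison such as WHERE (created_at, id) < ($1, $2) in Postgres, which keeps pagination stable even when many rows share the same timestamp.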