The 22 things we look for, the real-world consequence of each failing, and the Pro audit that goes deeper when it matters.
A committed `.env` file leaks every credential it contains to anyone with repo read access — and on public GitHub, to automated secret-scanners that crawl new commits within seconds (GitGuardian's 2024 State of Secrets Sprawl report recorded 23.8 million new secrets leaked on GitHub in 2023 alone). Database URLs, Stripe live keys, OAuth client secrets, and third-party API tokens routinely end up in these files. AI coding tools default to creating `.env` alongside `.env.example` when scaffolding, and will happily add the `.env` as a tracked file if the project doesn't already carry a gitignore entry covering it. The majority of "my secrets leaked" incidents on GitHub start exactly here: a single accidental commit, often by a contributor who didn't realize the file wasn't ignored. Once it lands in git history, removing it requires a history rewrite AND rotating every credential the file contained.
Why this severity: Critical because a committed `.env` hands over production credentials wholesale — one leaked file typically means rotating every service the project connects to, and public-repo exposure is scraped by attackers in minutes, not days.
project-snapshot.security.env-files-gitignored (See full pattern)

A `sk_live_`, `AKIA`, `ghp_`, or private-key literal inside committed source code is functionally identical to publishing the credential — git history is forever, and secret-scanning services (GitHub's built-in scanner, TruffleHog, GitGuardian) will find it faster than most teams can. The 2022 Uber breach began with a contractor's GitHub-leaked credential, and GitGuardian's 2024 State of Secrets Sprawl report logged 23.8 million fresh secrets pushed to GitHub across 2023. AI coding tools produce this anti-pattern routinely: asked to "add Stripe", they paste the example key from docs inline; asked to "connect to the database", they inline the connection string. Even `git rm` doesn't help — the blob stays reachable through history until a full rewrite plus credential rotation. Stripe, AWS, and GitHub all auto-revoke detected keys, so the loud failure mode is service outage; the quiet mode is an attacker finding it first and quietly using it.
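The shape of this check can be sketched as a small pattern scanner. This is an illustrative sketch, not the audit's actual implementation — the pattern list and `scanForSecrets` name are ours, and real scanners (TruffleHog, gitleaks) use far richer rules plus entropy analysis.

```typescript
// Illustrative secret-prefix scanner. Patterns cover the literal prefixes
// named above: Stripe live keys, AWS access keys, GitHub PATs, PEM keys.
const SECRET_PATTERNS: Record<string, RegExp> = {
  stripeLive: /sk_live_[A-Za-z0-9]{10,}/,
  awsAccessKey: /AKIA[0-9A-Z]{16}/,
  githubPat: /ghp_[A-Za-z0-9]{36}/,
  privateKey: /-----BEGIN [A-Z ]*PRIVATE KEY-----/,
};

function scanForSecrets(source: string): string[] {
  // Return the names of every pattern that matches the source text.
  return Object.entries(SECRET_PATTERNS)
    .filter(([, re]) => re.test(source))
    .map(([name]) => name);
}
```

A pre-commit hook that runs something like this over staged files catches the mistake before it reaches history; after the fact, only a rewrite plus rotation helps.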
Why this severity: Critical because a hardcoded key is already compromised the moment it hits git — there is no grace period, no mitigation short of rotation, and no way to claw the secret back from archived clones or cached views.
project-snapshot.security.no-hardcoded-secrets (See full pattern)

A variable name like `NEXT_PUBLIC_SUPABASE_SERVICE_ROLE_KEY` is a direct contradiction in terms — the `NEXT_PUBLIC_` prefix tells the bundler to inline the value into every client bundle, and a service-role key grants RLS-bypassing database access. Shipping one to the browser gives any visitor full admin privileges against the database, including reading other users' rows, writing arbitrary data, and deleting tables. This is a documented failure pattern in the Supabase + Lovable ecosystem, where anyone can extract the inlined service-role key from the shipped JavaScript via devtools. It shows up constantly in AI-generated Supabase starters because the tool sees `SUPABASE_` env vars in docs and reaches for the public prefix out of pattern-matching habit, not understanding the access-tier distinction between anon and service-role keys. The variable name alone is enough signal to flag — intent doesn't matter when the prefix dictates build-time behavior.
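Because the variable name alone is the signal, the check reduces to string logic. A minimal sketch, with an illustrative prefix and keyword list of our own choosing:

```typescript
// Client-exposure prefixes inline the value into the shipped bundle.
const CLIENT_PREFIXES = ["NEXT_PUBLIC_", "VITE_", "REACT_APP_", "EXPO_PUBLIC_"];
// Substrings that indicate a server-only credential (illustrative list).
const SERVER_ONLY_HINTS = ["SERVICE_ROLE", "SECRET", "PRIVATE_KEY"];

function isLeakyEnvName(name: string): boolean {
  const prefix = CLIENT_PREFIXES.find((p) => name.startsWith(p));
  if (!prefix) return false; // no client prefix, bundler won't inline it
  const rest = name.slice(prefix.length);
  return SERVER_ONLY_HINTS.some((h) => rest.includes(h));
}
```

The anon key is designed to ship to clients, so `NEXT_PUBLIC_SUPABASE_ANON_KEY` passes; the service-role variant fails on the name alone.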
Why this severity: Critical because the naming pattern guarantees client-bundle exposure on every deploy, and a leaked service-role key produces immediate full database compromise accessible from any visitor's devtools.
project-snapshot.security.no-service-role-in-public-bundle (See full pattern)

A Supabase table without row-level security is readable and writable by anyone who has the anon key — and the anon key ships in every client bundle by design. An attacker who opens devtools on the login page can pull the JWT, hit the REST endpoint directly, and dump or mutate the entire table. Firebase/Supabase misconfiguration incidents catalogued at appsecurity.net show a recurring pattern where RLS is either disabled outright or uses a permissive `USING (true)` policy, handing every table's contents to anyone with the anon key (which ships in the client bundle). AI coding tools routinely create tables with the Supabase CLI or dashboard without emitting the `ENABLE ROW LEVEL SECURITY` statement and without writing policies, because the default `CREATE TABLE` syntax doesn't require it. This is the single most common cause of public Supabase data leaks — the pattern has been behind dozens of disclosed incidents where customer emails, private messages, and payment records were scraped from production anon endpoints.
Why this severity: Critical because a disabled RLS policy or a `USING (true)` policy exposes every row to unauthenticated reads and writes through the public anon key shipped in every client bundle.
project-snapshot.security.supabase-rls-enabled-with-real-policies (See full pattern)

An unauthenticated API route that reads or writes user data is a direct broken-access-control vulnerability — any visitor with the URL can enumerate profiles, export other tenants' records, or mutate state belonging to accounts they don't own. Optus (Australia) lost 10 million customer records through an unauthenticated API endpoint in 2022, triggering an AU$50M+ class action and regulatory action from the Australian government; T-Mobile disclosed a 37-million-customer breach in 2023 via a similar API-side failure. AI coding tools routinely produce this failure mode because they scaffold route handlers from example snippets that skip the session check, and because request-body validation visually looks like "protection" even though it enforces shape, not identity. The same pattern also catches webhook endpoints that accept arbitrary POST payloads without verifying the provider signature, which lets anyone forge subscription upgrades, payment confirmations, or delivery events. This is the single most common critical finding in post-incident reviews of vibe-coded apps.
Why this severity: Critical because an unauthenticated data route is directly exploitable by any visitor with the URL — no credentials, tooling, or chained vulnerabilities required — and typically exposes multi-tenant data in bulk (Optus, T-Mobile, and dozens of smaller breaches have originated exactly here).
project-snapshot.security.api-routes-require-auth (See full pattern)

AI coding tools love the Next.js parens-prefix convention — they will happily scaffold `app/(authenticated)/dashboard/page.tsx` or `app/(app)/settings/page.tsx` and then forget that the parenthesized segment is purely a routing grouping, not a security boundary. The directory name carries no runtime effect; if the page file itself never calls `auth()`, `getServerSession()`, or `supabase.auth.getUser()`, anyone who guesses the URL gets the page rendered, HTML streamed, and server-component data leaked. Facebook's 2018 token-flaw breach exposed 50 million accounts through a similar gap between "this route is supposed to be protected" and "this route actually checks the session server-side." OWASP ranks Broken Access Control as the #1 web risk in its 2021 Top 10 precisely because this class of failure is this easy to ship. The failure is especially cruel because the app feels secure during development — the team tests logged-in, sees the dashboard, ships. The anonymous-traffic path is never exercised until an attacker finds it.
Why this severity: Critical because an unguarded protected-group route streams private user data and admin surfaces to anonymous requests — a complete access-control failure with zero mitigation until the session getter is wired in.
project-snapshot.security.protected-routes-call-session-getter (See full pattern)

Insecure Direct Object Reference (IDOR) is the most-exploited class of API vulnerability in the modern web and the top entry on the OWASP API Security Top 10 (API1:2023 — Broken Object Level Authorization). The pattern is almost always the same in AI-generated code: a route at `/api/orders/[id]/route.ts` reads `params.id` and runs `db.order.findUnique({ where: { id: params.id } })` without adding a `user_id = session.user.id` predicate. The route authenticates the requester but never authorizes them against the resource, so any logged-in user can iterate IDs and harvest strangers' orders, messages, invoices, or medical records. Optus suffered a catastrophic 10-million-customer breach in 2022 that traced back to exactly this pattern: guessable contact IDs, no ownership predicate. Snapchat's 2014 "Find Friends" leak exposed 4.6 million phone numbers the same way. AI tools produce this reliably because the naive happy-path code ("read the row matching the URL id") works fine when the developer is the only user in the test database.
Why this severity: Critical because any authenticated user can trivially enumerate IDs and exfiltrate every other user's private data — no exploit chain needed, just a loop that increments the URL parameter.
project-snapshot.security.object-level-access-control (See full pattern)

The single most common AI-coding-tool security-theater failure: Zod, Yup, Joi, Valibot, or ArkType is added to `package.json`, a `schemas.ts` file is populated with beautiful `z.object({ ... })` definitions, and then nothing ever actually calls `.parse()`, `.safeParse()`, `.validate()`, or `.assert()` at runtime. Route handlers read `await req.json()` and pipe the raw body straight into a database write, an external API call, or an email template. The schema exists only as a type alias via `z.infer<typeof X>`. The code LOOKS validated — reviewers see the schema file, the types line up, the IDE autocompletes — but the wire is wide open. OWASP A03 (Injection) and A04 (Insecure Design) are the direct mappings: without a runtime parse, mass-assignment attacks (sending `{ email, role: "admin" }` to a signup endpoint), prototype pollution via crafted payloads, type-coercion exploits, and stored XSS via unsanitized text all reach your data layer unfiltered. Cursor and v0 produce this anti-pattern by default when asked to "add Zod" — they scaffold the schema and stop there.
Why this severity: Critical because dangling schemas provide zero runtime protection — every mutation endpoint that appears validated is actually wide open, and the attack surface spans every OWASP injection and mass-assignment class simultaneously.
project-snapshot.security.validation-schemas-have-runtime-use (See full pattern)

When an AI coding assistant is prompted to "add security" or "rate-limit this API" it will reliably install `helmet`, `express-rate-limit`, `csurf`, `csrf-csrf`, or a CORS package, import the module at the top of a file, and then forget to actually register the middleware with the app. The package shows up in `package.json`, the import statement satisfies the grep-for-safety review instinct, and the code LOOKS defended — but without an `app.use(helmet())`, `app.use(limiter)`, `fastify.register(cors)`, or equivalent registration, none of the middleware runs on any request. Every response ships with no security headers, every endpoint accepts unbounded traffic, every state-changing POST is vulnerable to CSRF. OWASP A05 (Security Misconfiguration) is the exact mapping. This pattern is especially insidious because it passes visual code review — a human glance sees "imports helmet, imports rate-limit, looks fine" — and only deep inspection of the app-initialization code reveals that none of the middleware chain fired.
Why this severity: High because every unapplied middleware represents a missing control that the code pretends to have — attackers get XSS-prone responses, unlimited request rates, and unprotected state mutations while reviewers believe the app is hardened.
project-snapshot.security.security-middleware-applied (See full pattern)

Three response headers carry most of the browser-enforced defenses that make modern web apps resistant to common attacks: `Strict-Transport-Security` (HSTS) prevents TLS-strip downgrades, `Content-Security-Policy` (CSP) caps the blast radius of any XSS bug, and `Referrer-Policy` stops URLs containing tokens or session identifiers from leaking to third parties in referrer strings. AuditBuffet's own production telemetry across 37 first-run `security-headers` audits shows HSTS missing in 76% of scans, CSP missing in 65%, and Referrer-Policy missing in 54% — the majority of AI-generated sites ship with all three absent. The attacker's path is concrete: without HSTS, a user on a hostile Wi-Fi network can be downgraded to HTTP and have their session cookie stolen; without CSP, a single reflected-XSS bug becomes a full account-takeover via arbitrary script execution; without Referrer-Policy, password-reset links, magic-login tokens, and OAuth callbacks leak into analytics pipelines and third-party ad networks. GDPR Article 32 (appropriate technical measures) treats referrer leakage of personal identifiers as a reportable data incident.
Why this severity: High because each missing header leaves a distinct browser-level defense un-enforced — the composite exposure spans network-layer downgrade attacks, XSS amplification, and session-token leakage through referrers simultaneously.
project-snapshot.security.security-headers-present (See full pattern)

A handful of JavaScript APIs parse strings as live code or live HTML — `dangerouslySetInnerHTML`, `eval(...)`, `new Function(...)`, `document.write(...)`, and direct `element.innerHTML = ...` assignment. Each one turns a string into executable behavior. When the string originates from user-controllable input — a request body, URL parameter, cookie, or database row that was itself populated from user input — and reaches the sink without sanitization, the result is stored or reflected XSS, arbitrary script execution, and in server contexts potential RCE via `new Function()`. The 2018 British Airways Magecart attack skimmed 380,000 customer payment cards through exactly this pattern: an injected script reached a rendering sink that trusted its input. The Information Commissioner's Office fined BA £20 million under GDPR Article 32. OWASP ranks Injection as the #3 web risk in its 2021 Top 10. AI coding tools produce this pattern whenever they scaffold a "rich text preview", a "markdown renderer", or a "dynamic expression evaluator" without wiring in a sanitizer — the happy path works, the attack path is invisible until exploited.
Why this severity: Critical because a single path from user input to a live-HTML or live-code sink produces client-side XSS (session-cookie theft, account takeover) or server-side RCE — the full Magecart / supply-chain-compromise playbook in a single unsanitized line.
project-snapshot.security.dangerous-sinks-not-fed-user-content (See full pattern)

Loading PostHog, Google Analytics, Segment, Mixpanel, or Hotjar before an EU/UK visitor clicks "accept" violates GDPR Art. 7 and the ePrivacy Directive — France's CNIL fined Google €150M and Facebook €60M in 2022 for non-compliant cookie consent, the EDPB added a €390M fine against Meta in 2023, and EU courts have held that cookies set before user opt-in on any tracking script are per-se unlawful under the ePrivacy Directive. AI coding tools ship the classic failing shape: a `posthog.init()` or `<Script src="googletagmanager.com/...">` dropped into `app/layout.tsx` during setup with no consent gate wired in, because the assistant knew how to install the SDK but not the regulatory dependency. The user-facing failure is silent — the site works, cookies get dropped, and the liability accrues until a complaint or audit surfaces it. California CPRA, Quebec Law 25, and Brazil's LGPD impose parallel prior-consent requirements.
Why this severity: High because the exposure scales per-visitor and regulators have issued nine-figure fines (CNIL €150M v. Google, EDPB €390M v. Meta) specifically for tracking-before-consent — the remediation is narrow but the pre-fix exposure compounds with every page load.
project-snapshot.legal.cookie-consent-before-tracking (See full pattern)

Screen readers announce an image with no `alt` attribute by reading the filename or the word "graphic" — a photo of your CEO becomes "IMG_4782.jpg" and a chart becomes "unlabeled graphic." Missing alt text is direct plaintiff's-bar lawsuit bait: Robles v. Domino's (cert denied by SCOTUS, 2019) and Gil v. Winn-Dixie ($250K plus $4M in ordered remediation) both turned on content-image accessibility, and Seyfarth Shaw tracks roughly 3,600 federal ADA website lawsuits filed annually with about 75% citing missing alt text as a primary claim. Users relying on assistive tech lose the information the image was carrying, which on transactional pages (product photos, verification checks, captcha alternatives) blocks task completion entirely. AI coding tools are particularly prone to this failure because `next/image` and `<img>` render without `alt` (it is an optional prop, not a required one), and generated React components routinely ship with the attribute absent. This check also catches the opposite failure: a codebase where every image carries `alt=""`, meaning the model defaulted everything to decorative rather than writing real descriptions. Missing alt text is a documented WCAG 2.2 Level A failure (SC 1.1.1) and the single most common complaint captured in accessibility support tickets.
Why this severity: Critical because missing `alt` on content images is the #1 claim in ~3,600 annual federal ADA website lawsuits tracked by Seyfarth Shaw — the exposure is an active, industrialized plaintiff's bar, not a theoretical risk.
project-snapshot.legal.images-have-alt-text (See full pattern)

An `<input>` without `<label htmlFor>`, `aria-label`, or a wrapping `<label>` parent is announced by screen readers as "edit text" with no indication of what the field is for — on a checkout or signup form this turns every submission into a guessing game and drives abandonment. The NFB v. Target $6M settlement cited form-label failures as a lead claim, and unlabeled form fields sit as the #2 most common claim in the ~3,600 annual federal ADA website lawsuits tracked by Seyfarth Shaw (#1 is missing alt text). The most common anti-pattern is using the `placeholder` attribute as the sole label, which visually looks labeled to a sighted developer but disappears on focus, provides no accessible name, and forces users with cognitive or memory impairments to delete their input just to re-read the prompt. AI coding tools produce this failure constantly because modern Tailwind-shaped UI snippets often omit `<label>` in favor of placeholder text for visual compactness, and because form libraries that wire up labels via `id` wiring frequently lose the wiring when a model refactors a form. Browser autofill and password managers also rely on labels to decide what to autofill, so unlabeled fields quietly break password saving.
Why this severity: High because form-label failures drove the NFB v. Target $6M settlement and rank as the #2 claim in ~3,600 annual federal ADA website lawsuits — the exposure is an active plaintiff's-bar target, not theoretical.
project-snapshot.legal.form-inputs-have-labels (See full pattern)

GDPR Art. 17 ("right to be forgotten") and CCPA §1798.105 both require that a user who asks for their account and personal data to be deleted actually gets it deleted — not hidden behind a `deletedAt` timestamp while the email, name, and history sit indefinitely in the database. CNIL fined Clearview AI €20M in 2022 in part for failing to act on erasure requests, and ICO enforcement routinely issues multi-million-pound penalties on the same grounds. Beyond regulatory exposure, the Apple App Store and Google Play Store have both mandated in-app account deletion since 2022: apps that gate deletion behind a support-email workflow get rejected at review. AI coding tools reliably scaffold signup, login, and password reset, then skip the deletion route entirely — because "how do I build auth" is a well-represented training pattern and "how do I build GDPR-compliant erasure" is not. The quiet failure mode is a live app that passes every happy-path test while accumulating unlawful retention exposure with every new user.
Why this severity: Critical because missing or stub-only deletion is a direct ongoing Art. 17 violation accruing exposure per active user per day, blocks App Store and Play Store approval outright, and cannot be patched retroactively against users who already asked and were ignored.
project-snapshot.legal.account-deletion-coded (See full pattern)

GDPR Art. 20 gives every EU user a statutory right to receive a structured, machine-readable copy of the personal data you hold about them, and CCPA §1798.110 gives California users a parallel right-to-know. Both have a one-month (GDPR Art. 12) response clock that starts the moment a request lands in a support inbox. Without a working export endpoint the only way to satisfy a portability request is a panic-mode manual database dump — which regulators treat as a risk factor in its own right, because ad-hoc dumps leak data they should not include. The GDPR fine ceiling for Art. 20 non-compliance is 4% of global turnover or €20M; CCPA AG enforcement has reached $7,500 per affected consumer. AI coding tools reliably build login and profile-edit flows, but rarely scaffold a `/api/me/export` route, because training corpora skew heavily toward CRUD-happy-path code and away from regulator-facing plumbing. The quiet failure mode is a live site that looks compliant until the first SAR lands and nobody knows where to point it.
Why this severity: High because missing export directly blocks Art. 20 / §1798.110 compliance on a one-month regulator clock, scales liability per-user, and cannot be handwaved as "best effort" — the obligation is binary.
project-snapshot.legal.data-export-endpoint-exists (See full pattern)

CCPA §1798.135(a)(1), as strengthened by the 2023 CPRA amendment, requires a literal-text "Do Not Sell or Share My Personal Information" link (or the state-approved alternate "Your Privacy Choices" link with the official blue-white icon) in the footer of any site that "sells or shares" personal data — and under CPRA, routing any identifier (IP, device ID, cookie ID) to ads, analytics, or retargeting counts as "sharing." Sephora paid $1.2M to the California AG in 2022 for missing exactly this link, and the Connecticut, Colorado, Virginia, and Utah parallel statutes each layer additional $7,500-per-violation exposure on top. AI coding tools reliably bolt Google Analytics, PostHog, Meta Pixel, or TikTok Pixel into a Next.js layout — because "add analytics" is a two-line copy-paste — but almost never scaffold the matching opt-out link, because the obligation only triggers once the pixel is present. The quiet failure mode is a site that successfully tracks California visitors while silently accruing per-visitor violations.
Why this severity: High because California AG enforcement has shown a clear willingness to settle for seven figures on the missing-link vector alone, the exposure scales linearly with California visitor volume, and parallel CPA / CTDPA / CDPA regimes compound it.
project-snapshot.legal.do-not-sell-or-opt-out-link (See full pattern)

A `console.log(user)`, `console.log(req.body)`, or `logger.info({ email, password })` inside a route handler ends up durably stored in Vercel Functions logs, CloudWatch, Datadog, Better Stack, or Sentry for days to months — searchable by every engineer, support agent, and third-party log processor with access, and harvestable in any breach of the logging pipeline itself. Slack (2023) leaked employee credentials through Sentry when raw auth objects were captured on exception; the T-Mobile 2023 class action ($350M) explicitly cited logged PII as a breach-scope amplifier. Under GDPR Art. 32 ("security of processing"), logging passwords, tokens, session IDs, or unredacted PII into a shared log pipeline is a processing-security failure whether or not the logs leak — and once they do, Art. 33-34 breach-notification timelines start. AI coding tools default to `console.log(req.body)` and `console.log(user)` during debugging and routinely leave those statements in on ship, because `grep console.log` is not part of the "ready to deploy" checklist that training data reinforces.
Why this severity: High because logged credentials are an immediate live credential-theft vector against the logging pipeline itself, and widen any downstream breach scope dramatically — small code change, disproportionate exposure reduction.
project-snapshot.legal.no-pii-in-server-logs (See full pattern)

For an online Terms of Service to be enforceable against a user, US courts require that the link to the terms be reasonably conspicuous at the point of assent — Nguyen v Barnes & Noble, Inc. (9th Cir. 2014) threw out arbitration because a footer-only "Terms of Use" link with no direct prompt was held insufficient, and Meyer v Uber (2d Cir. 2017) spelled out the visibility and placement requirements that have been applied ever since. A missing Terms link means your liability caps, arbitration clause, and choice-of-law provisions are likely unenforceable in a dispute. Apple App Store, Google Play Store, Stripe activation, Google OAuth verification, and Meta login all additionally require a public-facing Privacy Policy URL — apps without one get rejected, de-listed, or blocked from going live. AI coding tools reliably scaffold `app/page.tsx` and a CTA button, then skip `app/terms/page.tsx` and `app/privacy/page.tsx` entirely, or leave the routes as 404 stubs that never got written.
Why this severity: Medium because the missing pages and links block real downstream gates (Stripe activation, store review, OAuth verification, clickwrap enforceability) but remediation is mechanical — two template pages plus two footer links.
project-snapshot.legal.terms-and-privacy-linked-from-footer (See full pattern)

A webhook handler that calls `req.json()` without verifying the signature will happily accept a forged payload from any attacker who can POST to the URL — and the URL is discoverable by scanning the site, reading the JavaScript bundle, or checking the provider dashboard's public integration docs. The forged `payment_succeeded`, `subscription_created`, or `checkout.session.completed` event then flows through the handler's normal happy-path logic, granting entitlements, marking orders paid, or corrupting records. AI coding tools scaffold webhook handlers without signature verification almost universally, because the end-to-end flow works fine during development: the real provider sends a real event, the handler processes it, the test passes. Stripe's own security documentation explicitly flags signature-skipping as the single most common webhook vulnerability, and the same pattern applies to Supabase auth webhooks, GitHub webhooks, and every other provider that signs its outbound requests.
Why this severity: High because a forged webhook event directly grants attackers whatever the handler grants — unauthorized product, unpaid subscriptions, free credits, or record tampering — at the cost of a single unsigned HTTP POST from anywhere on the internet.
project-snapshot.abuse.webhook-signature-verified (See full pattern)

Authentication endpoints without rate limits are a credential-stuffing accelerant: attackers replay leaked username/password pairs from prior breaches against the `/login` endpoint at ten thousand requests per second from a distributed botnet, and any account whose user reuses a password is compromised. 23andMe's October 2023 breach — 6.9 million users exposed, followed by an FTC settlement and class-action payouts — was a pure credential-stuffing attack that succeeded specifically because login attempts were not rate-limited. Signup endpoints are equally risky: without rate limits they enable automated account creation to harvest free trials, spam the platform, or warm up accounts for later abuse. Password-reset endpoints without rate limits enable enumeration (probing whether an email exists by reading response timing) and SMS/email-cost pumping. AI coding tools omit rate limits on auth endpoints almost universally, because the happy-path flow works fine with a single request.
Why this severity: High because credential-stuffing at scale produces real account takeovers and direct regulatory liability (FTC, state AG actions, GDPR breach reporting), and the remediation is cheap while the exposure compounds over time as breach corpora grow.
project-snapshot.abuse.rate-limit-on-auth-endpoints (See full pattern)

Stripe retries webhook deliveries on any non-2xx response or timeout, and the retry schedule extends across up to 3 days of exponential backoff for a single event. Supabase, GitHub, Resend, and nearly every other webhook-emitting platform follow the same "retry until ACK" contract. Without an idempotency check, each retry causes your handler to reprocess the same event: users get double-charged, subscriptions grant the same product twice, counters advance twice, confirmation emails arrive twice, and audit logs record ghost transactions. This is the canonical "discovered 30 days after launch when a brief outage causes a Stripe retry storm" footgun. AI coding tools scaffold webhook handlers that work perfectly in happy-path testing — one event in, one database write out — and completely miss the retry semantics that only appear in production under load or during provider-side delivery delays. Stripe's documentation warns about this explicitly under "Handle events asynchronously," but the warning doesn't survive the scaffolding transplant.
Why this severity: Medium because the failure mode requires an actual retry event (not constant) but when it fires it causes real financial damage (double-charges, duplicate grants, corrupted counters) and is typically discovered only after customer complaints surface in production.
project-snapshot.abuse.webhook-idempotency (See full pattern)

A list endpoint with no LIMIT, no pagination, and no maximum-rows cap is a denial-of-service vector disguised as a feature. One user requests `/api/users` or `/api/messages`, the handler issues `SELECT * FROM users` with no LIMIT clause, the database returns a million rows, the serialized JSON response is two hundred megabytes, Postgres memory spikes, connection pools exhaust, and the entire application goes down for every other user until someone notices. Shopify's and Figma's engineering blogs both have postmortems on this exact shape — high-cardinality list endpoints without pagination discipline are consistently in the top three P0 outage patterns in production Postgres shops. AI coding tools scaffold `SELECT * FROM table` idioms that work perfectly on seed data with 10 rows and fail catastrophically at production scale, especially when the table grows linearly with user activity (messages, notifications, audit logs, session records). Any `/api/*` endpoint backed by an unbounded query is a latent outage trigger waiting for one motivated or malicious user to notice.
Why this severity: Medium because the failure is probabilistic (requires someone to actually hit the endpoint against a populated table) but when it fires it takes down the entire app for every user and can be deliberately triggered by any anonymous or authenticated attacker.
project-snapshot.abuse.unbounded-list-queries (See full pattern)

AI scaffolding routinely generates `/api/checkout` handlers that accept `amount`, `price`, or `priceId` from the client request body and pass the value straight into `stripe.checkout.sessions.create(...)`, `paddle.transactions.create(...)`, `lemonSqueezy.checkouts.create(...)`, `polar.checkouts.create(...)`, or `braintree.transaction.sale(...)`. Any buyer can open browser dev tools, rewrite the outbound request, and pay $0.01 for a $999 product — the handler dutifully forwards the tampered number to the processor, which has no idea the merchant's intended price was different. This pattern is one of the most common indie-SaaS exploits reported on Twitter/X and in Shopify-app vulnerability disclosures since 2019; Stripe's own `Checkout.sessions.create` documentation opens with an explicit warning against it. The fix is trivial (look up the price server-side from a catalog or validated allowlist), but the LLM default — "accept what the client sent, pass it through" — produces the broken shape every time unless the prompt specifies server-side derivation. Once the site has real traffic, automated scrapers find mispriced checkouts within days.
Why this severity: Critical because every successful exploit is a direct revenue loss that compounds per sale, and attackers can script unlimited $0.01 purchases across the entire catalog in minutes.
project-snapshot.abuse.payment-amount-server-side-only (See full pattern)

Server-side `console.log(req.body)`, `logger.info({ user, session })`, or `Sentry.captureException(err, { extra: { headers } })` calls that dump full request bodies, card numbers, CVVs, session tokens, or API keys have two compounding cost consequences beyond privacy exposure. First, PCI scope expansion — the moment a credit-card field lands in a log line, the log ingestion pipeline (Datadog, CloudWatch, Loki, Sentry, Logtail) becomes "in-scope" for PCI-DSS audits, adding roughly $50K-$200K per year in compliance overhead for QSA assessments, retention encryption, and access-control review. Second, abuse-replay blast radius — anyone with log read access (your ops team, Sentry seat-holders, a compromised vendor) can replay sessions, hijack tokens, or reuse cards. The 2022 Twilio incident exposed years of customer PII because log infrastructure was compromised; Capital One's 2019 IAM-misconfigured S3 logs resulted in 100M records leaked. PCI-DSS Req 3.2 explicitly prohibits storing authentication data post-authorization in any unprotected location, including logs. AI scaffolding produces this pattern casually: "add logging" becomes `console.log(req.body)` without a redactor in the loop.
Why this severity: High because one tainted log pipeline can expand PCI scope by hundreds of thousands of dollars per year and any log reader effectively becomes an auth principal.
project-snapshot.abuse.no-pii-in-server-logs (See full pattern)

Stack Scan runs these 22 checks in your AI coding tool. No signup, no credit card, no code uploaded. You get a score, the failing checks, and the exact Pro audits to run next.
Open Stack Scan