All 22 checks with why-it-matters prose, severity, and cross-references to related audits.
Serving legacy JPEG or PNG images without optimization adds hundreds of kilobytes to every page load that could be eliminated with no visual quality loss. WebP typically cuts image weight by 25–34% over JPEG, and AVIF by 50% over JPEG at equivalent quality — directly affecting Largest Contentful Paint (LCP), a Core Web Vitals metric Google uses for search ranking. On mobile connections, unoptimized images are the single largest contributor to slow page loads and high bounce rates. ISO 25010 performance-efficiency.resource-utilization classifies this as a resource waste that degrades end-user experience at scale.
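A minimal way to serve modern formats with graceful fallback is the `<picture>` element; the browser picks the first source it supports (filenames here are illustrative):

```html
<!-- AVIF where supported, then WebP, then JPEG as the universal fallback. -->
<picture>
  <source srcset="hero.avif" type="image/avif">
  <source srcset="hero.webp" type="image/webp">
  <img src="hero.jpg" alt="Product hero" width="1200" height="630">
</picture>
```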
Why this severity: High because legacy-format content images consistently account for the largest avoidable payload on web pages, directly degrading LCP scores and search ranking.
performance-load.images.image-format

Images without explicit `width` and `height` attributes cause Cumulative Layout Shift (CLS), one of three Core Web Vitals metrics. When a browser downloads an image and discovers its dimensions, it reflows surrounding content — shifting text and buttons mid-read. Google's CWV assessment penalizes pages with CLS above 0.1, and users who experience unexpected layout shifts are significantly more likely to abandon a page. This is a direct business impact: form submissions and purchase buttons that shift away from the user's tap are missed conversions.
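A sketch of the fix: explicit intrinsic dimensions let the browser reserve the correct space before the file arrives, while CSS keeps the image fluid (filename illustrative):

```html
<!-- width/height establish the aspect ratio up front, preventing reflow;
     the inline style keeps the image responsive at any viewport width. -->
<img src="team.jpg" alt="Team photo" width="800" height="600"
     style="max-width: 100%; height: auto;">
```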
Why this severity: High because Cumulative Layout Shift caused by dimensionless images is a Core Web Vitals failure that directly harms search ranking and user retention.
performance-load.images.image-sizing

Eagerly loading below-fold images forces users to download content they may never scroll to — consuming bandwidth, extending Time to Interactive, and degrading performance on mobile or metered connections. On pages with galleries, comment sections, or long lists of testimonials, below-fold images can represent 60–80% of total image payload. This wasted transfer directly increases LCP for above-fold content, since the browser competes for bandwidth across all eager loads. ISO 25010 performance-efficiency.time-behaviour captures this as unnecessary resource acquisition that delays the user's first meaningful interaction.
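A hedged example of the split (filenames hypothetical): the above-fold LCP image stays eager, while below-fold content opts into native lazy loading:

```html
<!-- Above the fold: load eagerly; fetchpriority="high" can further
     prioritize the LCP image. -->
<img src="hero.jpg" alt="Hero" width="1200" height="600" fetchpriority="high">

<!-- Below the fold: defer the download until the user scrolls near it. -->
<img src="testimonial-7.jpg" alt="Customer testimonial"
     width="400" height="300" loading="lazy">
```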
Why this severity: Medium because below-fold eager loading wastes bandwidth and increases LCP indirectly, but does not prevent the page from being usable once loaded.
performance-load.images.lazy-loading

A 2400px-wide hero image displayed at 600px on mobile forces the device to download and decode four times the necessary pixel data — typically 3–8x the file size of an appropriately sized image. On 4G, this adds 300–800ms of avoidable load time; on 3G it can exceed 2 seconds. Mobile users represent over 60% of web traffic globally, and delivering desktop-sized assets to them is a direct resource waste that ISO 25010 performance-efficiency.resource-utilization identifies as disproportionate consumption. `srcset` is the standard browser mechanism for this, and frameworks like Next.js automate it completely.
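A `srcset` sketch with illustrative breakpoints and filenames; the browser picks the smallest candidate that satisfies the rendered size described by `sizes`:

```html
<!-- 100vw on small screens, half the viewport otherwise; the browser
     multiplies by device pixel ratio and chooses the cheapest match. -->
<img src="hero-800.jpg"
     srcset="hero-600.jpg 600w, hero-1200.jpg 1200w, hero-2400.jpg 2400w"
     sizes="(max-width: 600px) 100vw, 50vw"
     alt="Hero" width="1200" height="600">
```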
Why this severity: Low because most browsers on modern connections absorb the cost without visible failure, but the waste accumulates at scale across millions of mobile visits.
performance-load.images.responsive-images

A product card image with 2000px intrinsic width displayed at 400px wastes 97% of the downloaded pixels and commonly adds 400KB–2MB of avoidable payload. Multiply this across a catalog page with 12 products and users are downloading 5–24MB of image data that serves no visual purpose. Beyond raw bandwidth, the browser must decode every pixel before painting — a CPU-intensive operation that extends Time to Interactive and drains mobile batteries. ISO 25010 performance-efficiency.resource-utilization classifies this pattern as disproportionate resource consumption relative to the output produced.
Why this severity: Low because oversized images hurt performance without breaking functionality, but the cumulative payload cost is significant on image-heavy pages.
performance-load.images.no-oversized-images

A JavaScript bundle exceeding 250KB gzipped takes 2–5 seconds to parse and execute on a mid-range Android device, even on a fast connection — because parse time is CPU-bound, not network-bound. Google's research shows that 100KB of JavaScript takes ~1 second of CPU time on a median mobile device; a 500KB bundle costs 5 seconds before any user interaction is possible. This directly fails the Core Web Vitals Interaction to Next Paint (INP) threshold. CWE-770 (Allocation of Resources Without Limits) applies: shipping an unbounded bundle is a resource management failure that degrades every user's experience proportionally to device age.
Why this severity: Critical because oversized bundles cause multi-second parse delays on mobile devices before any interaction is possible, directly failing Core Web Vitals INP thresholds.
performance-load.bundle.no-large-bundles

Without route-based code splitting, every user downloads the JavaScript for every page in the application — including admin panels, dashboard views, and features they will never visit. A monolithic bundle of 800KB means a user loading the marketing homepage pays the parse cost for the checkout flow and settings page. Next.js, SvelteKit, and Nuxt split routes automatically; overriding this behavior or building a custom SPA without dynamic imports forces that full cost onto every visitor. ISO 25010 performance-efficiency.time-behaviour requires that system response time be proportionate to the work being done — loading unused code violates this.
Why this severity: High because a monolithic bundle without code splitting forces all users to parse and execute code for routes they never visit, adding 500ms–3s of unnecessary CPU time.
performance-load.bundle.code-splitting

Barrel files that re-export every module in a directory prevent the build tool from eliminating dead code — even when you only import one function, the bundler may include the entire barrel. A `lib/index.ts` that re-exports 50 utilities causes every importer to pull in all 50, regardless of usage. Without `sideEffects: false` in `package.json`, the bundler cannot safely drop any of those exports. On large codebases, this pattern adds tens to hundreds of kilobytes to the bundle that serve no user-visible function. ISO 25010 performance-efficiency.resource-utilization identifies this as avoidable resource inclusion.
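One common mitigation is declaring the package side-effect-free in `package.json`, which lets bundlers safely drop unused re-exports; this is only correct when the modules truly do nothing at import time (package name illustrative):

```json
{
  "name": "my-lib",
  "sideEffects": false
}
```

Files that do run code on import, such as global CSS or polyfills, can be listed instead, e.g. `"sideEffects": ["*.css"]`, so the bundler preserves only those.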
Why this severity: Low because tree-shaking issues rarely cause visible failures but silently add unnecessary payload, compounding with other bundle-size problems.
performance-load.bundle.tree-shaking

Unused production dependencies are dead weight with live consequences: they bloat the bundle, slow install times, expand the attack surface, and create maintenance debt for packages that will never receive targeted security updates. `moment` (60KB gzipped), full `lodash` (70KB gzipped), and `date-fns` without tree-shaking are common culprits found installed for a single date formatting call. Beyond bundle size, ISO 25010 performance-efficiency.resource-utilization counts every unnecessarily included module as a resource waste — and npm audit flags transitive vulnerabilities in packages you don't actually use.
Why this severity: Medium because unused large dependencies directly add payload to the bundle and expand the vulnerability surface area, even when the package itself works correctly.
performance-load.bundle.no-unused-dependencies

A charting library like Recharts (180KB gzipped) or a rich text editor like Monaco (2MB) statically imported in a shared layout loads on every page — even the marketing homepage, authentication screens, and error pages that never render a chart or editor. This adds hundreds of milliseconds of unnecessary parse time to routes where the library is invisible. ISO 25010 performance-efficiency.time-behaviour requires time-behaviour to be proportionate to the task being performed; serving 2MB of editor JavaScript to a user reading a blog post is not proportionate.
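A sketch using Next.js's `next/dynamic`; the `./Chart` component is hypothetical, and the point is that it becomes a separate chunk fetched only on routes that actually render it:

```tsx
import dynamic from 'next/dynamic';

// The heavy charting component is split out of the shared bundle and
// loaded on demand, with a lightweight placeholder while it arrives.
const Chart = dynamic(() => import('./Chart'), {
  loading: () => <p>Loading chart…</p>,
});
```

Plain `React.lazy` with a `Suspense` boundary achieves the same split in non-Next.js React apps.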
Why this severity: Low because heavy static imports increase parse time on unrelated pages, but they do not prevent functionality and often go unnoticed until measured.
performance-load.bundle.dynamic-imports

A font loaded without `font-display: swap` defaults to `auto` behavior — which blocks text rendering for up to 3 seconds while the font file downloads. During this Flash of Invisible Text (FOIT), users see a blank page where content should be. Google Fonts requests without the `&display=swap` parameter apply this blocking behavior by default. A single Google Fonts link loading 4 weights without `swap` can delay text visibility by 300–800ms on a 4G connection, directly harming LCP. ISO 25010 performance-efficiency.time-behaviour captures this as a preventable delay in delivering the primary content to the user.
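For self-hosted fonts the fix is a single descriptor in the `@font-face` rule (font name and path illustrative); for Google Fonts, the equivalent is keeping the `&display=swap` parameter on the stylesheet URL:

```css
/* swap: render text immediately in a fallback font, then swap in the
   web font when it finishes downloading; no invisible-text window. */
@font-face {
  font-family: "Inter";
  src: url("/fonts/inter.woff2") format("woff2");
  font-display: swap;
}
```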
Why this severity: Medium because blocking font-display causes Flash of Invisible Text — a visible content delay for real users — but does not prevent the page from eventually loading.
performance-load.bundle.font-loading

A synchronous third-party script in `<head>` blocks the browser's HTML parser for the duration of the network request and script execution — stalling everything: layout, rendering, and any other script that follows. A single slow analytics script hosted on an external CDN can delay First Contentful Paint by 500ms–2s if that CDN has any latency. This is entirely under the developer's control: `async` tells the browser to download the script in parallel and execute it when ready; `defer` executes after the document is parsed. Neither attribute changes the script's behavior for tracking or analytics purposes.
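An illustrative comparison (the analytics URL is hypothetical):

```html
<!-- async: fetch in parallel with parsing, execute as soon as downloaded
     (order not guaranteed); fine for independent analytics snippets. -->
<script async src="https://analytics.example.com/tracker.js"></script>

<!-- defer: fetch in parallel, execute after parsing, in document order;
     right for scripts that depend on the DOM or on each other. -->
<script defer src="/js/app.js"></script>
```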
Why this severity: Medium because each synchronous third-party script adds a hard blocking delay to page rendering, and the delay is entirely controlled by an external CDN with no SLA.
performance-load.bundle.third-party-scripts

A static marketing page with `export const dynamic = 'force-dynamic'` re-renders on every request — wasting server compute on content that hasn't changed, adding 100–500ms of server processing time compared to serving a cached HTML file, and forcing the CDN to bypass its edge cache entirely. Conversely, a personalized dashboard statically generated at build time serves stale data that doesn't reflect the current user's state. Mismatched rendering strategies cause either unnecessary cost (static content served dynamically) or incorrect behavior (dynamic content served statically). ISO 25010 performance-efficiency.time-behaviour requires that system response time reflect the minimum work necessary.
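For a Next.js App Router page, one sketch of the static-with-revalidation middle ground (the interval is illustrative, not prescriptive):

```ts
// Route segment config: serve the page as static HTML from the edge
// cache, but re-generate it in the background at most once per hour,
// instead of forcing a server render on every request.
export const revalidate = 3600;
```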
Why this severity: High because a mismatched rendering strategy either wastes server resources on unchanged content or serves stale data to users who expect real-time personalization.
performance-load.rendering.appropriate-rendering

Search engines index the HTML delivered by the server. A page that renders all content via client-side JavaScript sends search bots an empty `<div id='root'>` — the product description, pricing, and metadata are invisible until JavaScript executes, which Googlebot may not fully process or may process with a crawl budget delay. This directly harms organic search ranking for the pages most important to business discovery. Beyond SEO, client-side-only rendering means the user sees a blank page until JavaScript loads, parses, and executes — adding 1–4 seconds of blank white screen on slow connections. For SEO-critical pages, this is a dual failure: discoverability and user experience.
Why this severity: Critical because CSR-only SEO pages are effectively invisible to search engines, causing permanent ranking loss for the pages most critical to business discovery.
performance-load.rendering.no-client-only-seo

Setting `ssr: false` on above-the-fold components — the header, hero, and navigation — means the server sends empty HTML for those regions, and users see a blank viewport until JavaScript hydrates. This damages First Contentful Paint (FCP) and the perceived speed of the page, even when the full page eventually loads quickly. Browser pre-rendering and crawler indexing also fail for these regions: search engine snapshots taken before hydration miss the content entirely. ISO 25010 performance-efficiency.time-behaviour defines time-behaviour as the time from user action to visible response — a blank viewport is a failure of that response.
Why this severity: Low because hydration blocking typically causes a brief flash of blank content rather than a total failure, but on slow connections it extends to several seconds of blank above-the-fold space.
performance-load.rendering.hydration-strategy

Async components that render blank space while fetching data feel broken to users — they perceive the app as frozen, tap repeatedly, or abandon the page before content arrives. On 3G and flaky mobile networks the gap between navigation and first paint can stretch to several seconds, directly degrading Interaction to Next Paint and Cumulative Layout Shift (Core Web Vitals that feed Google ranking signals). Missing loading states also mask real failures: a stalled fetch looks identical to a successful empty state, so users cannot tell whether to wait or retry.
Why this severity: Low because the UX feels sluggish but functionality still works once data eventually arrives.
performance-load.rendering.loading-states

Sequential API requests that could run in parallel add their round-trip times together: if fetching a user profile takes 200ms and fetching their posts takes 150ms, a waterfall loads in 350ms; a parallel fetch completes in 200ms. Across a dashboard that chains four independent requests, this compounds to 600–800ms of avoidable latency before the page renders any data. For server components in Next.js, parallel fetching is a single line change (`Promise.all`). Request waterfalls are one of the most common and highest-impact performance issues in data-heavy applications, and they scale worse as API latency grows. ISO 25010 performance-efficiency.time-behaviour requires minimizing response time for the same work.
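The difference can be sketched with simulated fetches; `fetchProfile` and `fetchPosts` are hypothetical stand-ins for real API calls:

```javascript
// Simulate two independent requests with fixed latencies.
const delay = (ms, value) =>
  new Promise((resolve) => setTimeout(() => resolve(value), ms));
const fetchProfile = () => delay(200, { name: 'Ada' });
const fetchPosts = () => delay(150, [{ id: 1 }]);

async function loadDashboard() {
  // Waterfall version: await fetchProfile(), then await fetchPosts()
  // takes 200 + 150 = 350ms. Starting both first completes in
  // max(200, 150) = 200ms for identical results.
  const [profile, posts] = await Promise.all([fetchProfile(), fetchPosts()]);
  return { profile, posts };
}
```

The same shape applies to real `fetch` calls: start every independent request before awaiting any of them.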
Why this severity: High because each chained sequential request multiplies the total page load latency by the number of chain links, with real-world impact of hundreds of milliseconds to seconds.
performance-load.rendering.no-waterfall-requests

Static assets served without long-lived `Cache-Control` headers force every returning visitor to re-download CSS, JavaScript, and image files they already have — identical bytes transferred repeatedly for no reason. A 500KB bundle re-downloaded on every visit costs 500KB × daily_active_users in bandwidth per day. With `Cache-Control: public, max-age=31536000, immutable`, that same bundle is downloaded once per user per year. For users on metered connections or high-latency networks, the cache miss is also the difference between a 500ms and a 3-second load. ISO 25010 performance-efficiency.resource-utilization flags repeat downloads of identical resources as a direct waste.
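An illustrative nginx fragment for hashed build artifacts (the path is an assumption; this only belongs on files whose names change with their contents):

```nginx
# Content-hashed bundles never change in place: cache for a year
# and tell the browser it never needs to revalidate them.
location /assets/ {
    add_header Cache-Control "public, max-age=31536000, immutable";
}
```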
Why this severity: High because missing cache headers on static assets force full re-downloads on every page visit, wasting bandwidth and adding measurable latency for returning users.
performance-load.caching.static-cache-headers

Without content-hash filenames, long-lived cache headers create a contradiction: you want assets cached forever, but you also need users to get updated JavaScript after a deploy. Non-hashed filenames like `main.js` force a compromise — either short caches that cause repeat downloads, or long caches that serve stale JavaScript after deploys. Content hashing resolves this: `main.a3f92b.js` is immutable by definition — the hash changes whenever the content changes, guaranteeing users always get the latest code on deploy while keeping the file cached indefinitely between deploys. ISO 25010 performance-efficiency.resource-utilization requires that caching strategies actually eliminate redundant transfers.
Why this severity: Medium because non-hashed filenames force a cache-duration tradeoff that either penalizes return visitors with re-downloads or risks serving stale JavaScript after deploys.
performance-load.caching.immutable-hashing

An API endpoint that returns product listings or blog posts without a `Cache-Control` header causes every client request to hit the origin server — even when the data hasn't changed in hours. At scale, this is avoidable server load that accumulates into real infrastructure cost and latency. Conversely, a user-specific endpoint (account data, cart, session) served with `Cache-Control: public` allows CDN or browser caching to serve one user's data to another — a privacy violation. ISO 25010 performance-efficiency.resource-utilization applies: public data should be cached to eliminate redundant computation; private data must be protected from shared caches.
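Illustrative header pairs for the two cases (durations are assumptions to be tuned per endpoint):

```
# Shared, non-personalized listing: cacheable briefly at the CDN,
# refreshed in the background after it goes stale.
Cache-Control: public, max-age=300, stale-while-revalidate=60

# Per-user cart or session data: keep it out of shared caches entirely.
Cache-Control: private, no-store
```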
Why this severity: Low because missing API cache headers waste server resources on repeated identical requests, but the impact is gradual and does not block functionality until traffic scales.
performance-load.caching.api-cache-strategy

Gzip compression reduces HTML, CSS, and JavaScript file sizes by 60–80%; Brotli achieves 15–25% better compression than Gzip. A 500KB JavaScript bundle uncompressed is typically 150KB gzipped — a 350KB savings on every cold load. Without compression enabled on a custom server, every user downloads the raw uncompressed file. Managed platforms (Vercel, Netlify, Cloudflare) enable Brotli automatically, but projects deploying to a custom Express server, Docker container, or bare nginx config must configure it explicitly. ISO 25010 performance-efficiency.resource-utilization identifies uncompressed text delivery as a direct and avoidable resource waste.
Why this severity: Low because managed hosting platforms enable compression automatically, limiting this to custom infrastructure deployments — but on custom servers the oversight causes consistent 60–80% bandwidth waste.
performance-load.caching.compression-enabled

Every third-party origin — Google Fonts, an analytics service, a Stripe CDN — requires a DNS lookup, TCP handshake, and TLS negotiation before the first byte can transfer. On a cold connection, this setup cost takes 100–300ms per origin. A page loading fonts from Google Fonts, scripts from an analytics CDN, and assets from Stripe can accumulate 400–900ms of connection overhead that runs sequentially if browsers encounter these origins mid-parse. `<link rel="preconnect">` tells the browser to complete the TCP+TLS handshake during HTML parsing, eliminating that overhead by the time the asset is requested. ISO 25010 performance-efficiency.time-behaviour defines this as an optimizable latency in the system's response pathway.
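An illustrative pair of hints for Google Fonts' two origins:

```html
<!-- Finish DNS + TCP + TLS handshakes while the HTML is still parsing. -->
<link rel="preconnect" href="https://fonts.googleapis.com">
<!-- crossorigin is required for origins fetched in CORS mode, like font files. -->
<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>
```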
Why this severity: Info because preconnect hints are an incremental optimization — missing them adds measurable but not catastrophic latency — and their impact depends heavily on network conditions.
performance-load.caching.preconnect-hintsSee full patternRun this audit in your AI coding tool (Claude Code, Cursor, Bolt, etc.) and submit results here for scoring and benchmarks.
Open Performance & Load Readiness Audit