Without a performance baseline recorded before launch, any future deployment that regresses Core Web Vitals (LCP, CLS, INP) goes undetected until users notice the slowness. ISO/IEC 25010:2011 defines time behaviour (a sub-characteristic of performance efficiency) as a measurable quality attribute: if you do not measure it, you cannot detect regressions. Beyond technical debt, Core Web Vitals are a Google Search ranking signal, so an unmeasured, slow launch can suppress organic traffic from day one with no visible indicator to the developer.
Low, because the absence of a performance baseline causes delayed detection of regressions rather than immediate harm; it does, however, prevent the team from distinguishing intentional performance trade-offs from accidental degradation.
Record a Lighthouse baseline against your production URL immediately after launch and store the report in the repository.
# Record baseline — save output to repo
npx lighthouse https://yoursite.com --output=json --output-path=./lighthouse-baseline.json
For automated enforcement, install @lhci/cli and create a .lighthouserc.json with performance-score thresholds; Lighthouse CI then runs on every PR and fails the build if scores drop below your baseline. At minimum, document your performance targets in the README (for example: LCP < 2.5s, CLS < 0.1, Performance score ≥ 80). Add the web-vitals package to track real-user Core Web Vitals from production traffic.
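A minimal .lighthouserc.json sketch for the enforcement described above. The URL, run count, and 0.8 minimum score are placeholder assumptions; set them from your own recorded baseline:

```json
{
  "ci": {
    "collect": {
      "url": ["https://yoursite.com"],
      "numberOfRuns": 3
    },
    "assert": {
      "assertions": {
        "categories:performance": ["error", { "minScore": 0.8 }]
      }
    },
    "upload": {
      "target": "temporary-public-storage"
    }
  }
}
```

With this file in the repository root, `npx @lhci/cli autorun` collects the runs, asserts the thresholds, and uploads the reports in one step, failing the build when the performance category drops below the minimum score.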
ID: pre-launch.final-verification.performance-baseline
Severity: low
What to look for: Count all performance measurement configurations. Enumerate whether Lighthouse scores or Core Web Vitals baselines have been recorded for the production site. Check for any performance documentation: a Lighthouse report in the repo, performance metrics in the README, a performance.md or similar document, or comments in code noting expected load times. Look for Lighthouse CI configuration (.lighthouserc.json, .lighthouserc.js) or web-vitals tracking integration (web-vitals in dependencies; onCLS, onINP, onLCP, or the deprecated onFID function calls).
Pass criteria: Some evidence of performance measurement or baseline exists: a Lighthouse CI config, web-vitals tracking in the codebase, a performance report file, or README documentation of performance targets. At least one baseline measurement must be recorded, with a Lighthouse performance score of at least 60.
Fail criteria: No evidence of any performance measurement or baseline documentation.
Skip (N/A) when: Skip for API-only projects with no user-facing pages. Signal: project type is api or cli with no HTML pages.
Cross-reference: For error monitoring that includes performance, see error-monitoring. For analytics, see analytics.
Detail on fail: "No performance baseline documented — you will not know if a future change regresses load time without a reference point"
Remediation: Having a performance baseline before launch means you can detect regressions immediately. This doesn't require extensive tooling:
# Record baseline with Lighthouse
npx lighthouse https://yoursite.com --output=json --output-path=./lighthouse-baseline.json
- Install @lhci/cli and create .lighthouserc.json with your performance thresholds.
- Add web-vitals tracking to log Core Web Vitals (LCP, FID/INP, CLS) from real users.
- For a comprehensive analysis of performance and load readiness, the Performance & Load Readiness Audit covers this in depth.
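The web-vitals tracking mentioned above can be sketched as follows. The onCLS, onINP, and onLCP callbacks are the package's actual API; the /analytics endpoint is a placeholder assumption to replace with your own collector:

```javascript
// Sketch: report real-user Core Web Vitals with the web-vitals package.
// '/analytics' is a placeholder endpoint — point it at your own collector.
import { onCLS, onINP, onLCP } from 'web-vitals';

function sendToAnalytics(metric) {
  const body = JSON.stringify({
    name: metric.name,   // e.g. "LCP"
    value: metric.value, // ms for LCP/INP, unitless for CLS
    id: metric.id,       // unique per page load
  });
  // sendBeacon survives page unload; fall back to fetch with keepalive.
  if (navigator.sendBeacon) {
    navigator.sendBeacon('/analytics', body);
  } else {
    fetch('/analytics', { method: 'POST', body, keepalive: true });
  }
}

onCLS(sendToAnalytics);
onINP(sendToAnalytics);
onLCP(sendToAnalytics);
```

Each callback fires when its metric is final (typically on page hide), so this adds negligible overhead to the page itself.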