Without response time percentiles and throughput baselines, every deployment is a blind release: you cannot distinguish a 20% p95 regression from normal variance, or confirm that a refactor didn't silently degrade performance. ISO 25010's performance-efficiency.time-behaviour characteristic and NIST SP 800-53 AU-6 both call for this kind of data to be collected and reviewed. Teams without p50/p95 dashboards measure performance by user complaints, which means regressions stay invisible until they are severe enough for users to notice and report.
The severity is info because missing performance metrics are an observability gap rather than an active failure mode: the system works, but you cannot see whether it is degrading, or whether deployments improve or worsen response times.
Instrument your app with Datadog RUM or a comparable APM to collect p50, p95, error rate, and throughput.
npm install @datadog/browser-rum
// src/lib/monitoring.ts
import { datadogRum } from '@datadog/browser-rum';

datadogRum.init({
  applicationId: process.env.NEXT_PUBLIC_DATADOG_APP_ID!,
  clientToken: process.env.NEXT_PUBLIC_DATADOG_CLIENT_TOKEN!,
  site: 'datadoghq.com',
  service: 'my-app',
  env: process.env.NODE_ENV,
  version: process.env.NEXT_PUBLIC_APP_VERSION,
  sessionSampleRate: 100,      // collect RUM data for every session
  trackUserInteractions: true, // capture clicks and other user actions
});
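Page loads and XHR/fetch timings are captured automatically; for business-critical steps you can add your own measurements. A brief sketch using the addTiming and addAction APIs from @datadog/browser-rum, with hypothetical event names:

// Mark a custom point in time, relative to the start of the current view
datadogRum.addTiming('checkout_rendered');

// Record a business action with context, usable later as a dashboard filter
datadogRum.addAction('checkout_submitted', { cartSize: 3 });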
In the Datadog dashboard, create timeseries panels for: p50 and p95 response time, error rate (5xx%), and requests per minute. Set a 30-day baseline window. Enable deployment markers so each production deploy appears as an annotation on the chart.
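If dashboards are managed as code, the same panels can be declared through the Dashboards API. A minimal sketch; the metric names (trace.express.request.duration and its hits/errors counterparts) and the DD_API_KEY/DD_APP_KEY variables are assumptions, so substitute whatever metrics your APM actually emits:

// create-dashboard.ts: declare the panels as code (Node 18+)
const dashboard = {
  title: 'my-app performance',
  layout_type: 'ordered',
  widgets: [
    { definition: { type: 'timeseries', title: 'p50 / p95 response time', requests: [
        { q: 'p50:trace.express.request.duration{service:my-app}' },
        { q: 'p95:trace.express.request.duration{service:my-app}' } ] } },
    { definition: { type: 'timeseries', title: 'error rate (5xx)', requests: [
        { q: 'sum:trace.express.request.errors{service:my-app}.as_rate()' } ] } },
    { definition: { type: 'timeseries', title: 'throughput (req/min)', requests: [
        { q: 'sum:trace.express.request.hits{service:my-app}.as_rate()' } ] } },
  ],
};

await fetch('https://api.datadoghq.com/api/v1/dashboard', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'DD-API-KEY': process.env.DD_API_KEY!,
    'DD-APPLICATION-KEY': process.env.DD_APP_KEY!,
  },
  body: JSON.stringify(dashboard),
});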
ID: deployment-readiness.environment-configuration.performance-metrics
Severity: info
What to look for: Enumerate every relevant item. Look for a monitoring/APM integration (New Relic, Datadog, Grafana, AWS CloudWatch). Verify that metrics are actually being collected: response time (p50, p95), error rate, throughput. Check whether the metrics are visible in a dashboard with baseline comparisons or trend views.
Pass criteria: At least one of the following conditions is met: (1) performance metrics are collected and visible in a monitoring dashboard, covering at minimum response time percentiles (p50, p95), error rate, and throughput; (2) historical baselines are available for comparison.
Fail criteria: No metrics collected, or metrics exist but no dashboard, or only basic metrics without percentiles or baselines.
Skip (N/A) when: The project is not in production yet.
Detail on fail: "No monitoring dashboard found. Performance metrics are not being collected." or "Datadog collects response time but does not show percentiles (p50, p95)."
Remediation: Set up monitoring using Datadog or a comparable APM:
npm install @datadog/browser-rum @datadog/browser-logs
Initialize in your app:
import { datadogRum } from '@datadog/browser-rum';
import { datadogLogs } from '@datadog/browser-logs';

datadogRum.init({
  applicationId: process.env.DATADOG_APP_ID!,
  clientToken: process.env.DATADOG_CLIENT_TOKEN!,
  site: 'datadoghq.com',
  service: 'your-app',
  env: process.env.NODE_ENV,
  version: process.env.APP_VERSION,
});

// Forward browser errors and logs alongside the RUM data
datadogLogs.init({
  clientToken: process.env.DATADOG_CLIENT_TOKEN!,
  site: 'datadoghq.com',
  forwardErrorsToLogs: true,
});

datadogRum.startSessionReplayRecording();
Then in the Datadog dashboard, create panels for p50 and p95 response time, error rate (5xx%), and throughput (requests per minute), with a 30-day baseline window and deployment markers enabled.
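For the deployment markers, one option is to post an event from CI after each production deploy; dashboards can overlay matching events on the timeseries panels as annotations. A minimal sketch against the Datadog Events API, assuming DD_API_KEY and APP_VERSION are available in the pipeline:

// post-deploy.ts: run from CI after a successful production deploy (Node 18+)
const res = await fetch('https://api.datadoghq.com/api/v1/events', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'DD-API-KEY': process.env.DD_API_KEY!,
  },
  body: JSON.stringify({
    title: `Deployed my-app ${process.env.APP_VERSION}`,
    text: 'Production deployment',
    tags: ['service:my-app', 'env:production', 'event:deployment'],
  }),
});
if (!res.ok) throw new Error(`Datadog event post failed: ${res.status}`);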