AI-built projects that ship without tests accumulate risk with every change. There is no safety net when modifying working code — a refactor that breaks authentication or payment processing goes undetected until a user reports it. ISO 25010 maintainability.testability directly measures this gap. SSDF PW.8 requires testing as a software assurance control. The absence of tests is especially costly in AI-assisted development where the model makes structural changes across sessions that can silently break previously working behavior.
High because untested critical paths — authentication, payments, data validation — have no automated regression detection, making every future change a gamble against undetected breakage.
Start with the highest-value tests: critical business logic and API route behavior. Set up a test runner:
npm install -D vitest @vitest/coverage-v8
# or: npm install -D jest @types/jest ts-jest
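If you pick Vitest, a minimal config wires up the V8 coverage provider installed above. This is a sketch; the include patterns and reporters are assumptions to adjust to your project layout:

```typescript
// vitest.config.ts — minimal sketch
import { defineConfig } from 'vitest/config'

export default defineConfig({
  test: {
    include: ['tests/**/*.test.ts', 'tests/**/*.test.tsx'],
    coverage: {
      provider: 'v8', // matches @vitest/coverage-v8
      reporter: ['text', 'lcov'],
    },
  },
})
```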
Write your first test covering the most critical logic path:
// tests/lib/scoring.test.ts
// explicit imports; not needed if globals: true is set in vitest config
import { describe, it, expect } from 'vitest'
import { computeCategoryScore } from '../../lib/scoring'

describe('computeCategoryScore', () => {
  it('returns null when all checks are skipped', () => {
    const checks = [{ result: 'skip' }, { result: 'skip' }]
    expect(computeCategoryScore(checks)).toBeNull()
  })
})
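For context, the function under test might look like the sketch below. This is a hypothetical implementation (the real lib/scoring.ts belongs to the audited project); it exists only to show the behavior the test asserts: skipped checks drop out of the score entirely.

```typescript
// lib/scoring.ts — hypothetical sketch of the function under test
type Check = { result: 'pass' | 'fail' | 'skip' }

export function computeCategoryScore(checks: Check[]): number | null {
  // skipped checks are excluded before scoring
  const scored = checks.filter((c) => c.result !== 'skip')
  if (scored.length === 0) return null // all checks skipped: no score
  const passed = scored.filter((c) => c.result === 'pass').length
  return Math.round((passed / scored.length) * 100)
}
```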
Five targeted tests covering auth, payments, and core logic are more valuable than fifty snapshot tests of static UI components.
ID: code-maintainability.code-hygiene.tests-exist
Severity: high
What to look for: Test files (*.test.ts, *.test.tsx, *.spec.ts, *.spec.tsx) and __tests__/ directories. Then evaluate what those files actually test.
Note: the check does not require high coverage — it checks for the presence of meaningful tests, not coverage percentages. A project with 5 well-targeted tests covering critical paths passes. A project with 50 tests only covering trivial UI snapshots may not.
Pass criteria: Count all test files (.test., .spec., tests/) in the project excluding node_modules. Test files exist AND at least 1 test covers a critical path (authentication flow, core business logic, data validation, or API route behavior). Report the count even on pass: "Found X test files covering Y critical paths."
Fail criteria: No test files found in the project (excluding node_modules), OR test files exist but exclusively cover trivial UI rendering with no coverage of business logic, API routes, or validation. Do NOT pass when test files exist but contain only empty test suites or placeholder it.todo() calls with no actual assertions.
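The file census above can be approximated with standard tooling. A sketch using find and grep; the pattern lists are assumptions, extend them for your stack:

```shell
# Count test files outside node_modules
find . -path ./node_modules -prune -o \
  -type f \( -name '*.test.*' -o -name '*.spec.*' \) -print | wc -l

# List test files that never call expect(), i.e. likely placeholder suites
grep -rL --include='*.test.*' --include='*.spec.*' \
  --exclude-dir=node_modules 'expect(' .
```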
Cross-reference: For pre-commit hooks that could run tests before commit, see the git-hooks check in this audit.
Skip (N/A) when: The project is a static site or purely presentational UI with no business logic or API routes to test. Signal: no API routes, no authentication, no data processing logic.
Detail on fail: "No test files found in the project (no *.test.ts, *.spec.ts, or __tests__/ directories outside node_modules). Auth logic, payment handling, and API routes have no test coverage." or "Test files exist but only contain snapshot tests of static UI components — no tests for authentication, payment processing, or API route validation."
Remediation: Tests are the primary mechanism for catching regressions when code changes. AI-built projects that ship without tests accumulate risk with every iteration — there's no safety net when modifying working code.
Start with the highest-value tests:
// tests/lib/scoring.test.ts — test your critical business logic
import { describe, it, expect } from 'vitest'
import { computeCategoryScore } from '../../lib/scoring'

describe('computeCategoryScore', () => {
  it('returns null when all checks are skipped', () => {
    const checks = [{ result: 'skip' }, { result: 'skip' }]
    expect(computeCategoryScore(checks)).toBeNull()
  })

  it('excludes skipped checks from denominator', () => {
    // a skipped check must not penalize the score
    const withSkip = [{ result: 'pass' }, { result: 'skip' }]
    expect(computeCategoryScore(withSkip)).toBe(computeCategoryScore([{ result: 'pass' }]))
  })
})
Set up testing with:
npm install -D vitest @vitest/coverage-v8
# or: npm install -D jest @types/jest ts-jest
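Wiring the runner into package.json makes npm test the single entry point. The script names here are conventional, not required:

```json
{
  "scripts": {
    "test": "vitest run",
    "test:coverage": "vitest run --coverage"
  }
}
```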
For pre-launch readiness including test coverage requirements, the Pre-Launch Readiness Audit covers this in detail.