Scattered ad-hoc prompt construction across route handlers makes it structurally impossible to audit what user input reaches the model. When prompt building is decentralized, a single developer's shortcut—interpolating a query param into the system message in one handler—creates an injection vulnerability that is invisible to reviewers looking at other handlers. CWE-1427 and OWASP LLM01:2025 both identify inadequate prompt construction as a root-cause enabler. Centralized, typed prompt builders create a single auditable boundary: every change to what enters a prompt is visible, testable, and reviewable in one place. For teams under NIST AI RMF governance, demonstrating a controlled prompt construction process requires exactly this kind of centralization.
Low because the risk is architectural—it doesn't directly expose a vulnerability but makes it significantly harder to detect and prevent injection at other check points.
Create a dedicated prompt construction module (src/lib/prompts.ts) that accepts typed parameters and returns a fully-formed messages array. All AI call sites import from this module; no prompt building happens inline.
// src/lib/prompts.ts
import type { ChatCompletionMessageParam } from 'openai/resources'
const SYSTEM_PROMPT = `You are a helpful assistant for Acme. [full instructions]`
export function buildChatMessages(params: {
userMessage: string
context?: string // pre-sanitized retrieval context only
}): ChatCompletionMessageParam[] {
return [
{ role: 'system', content: SYSTEM_PROMPT },
...(params.context
? [{ role: 'system' as const, content: `<context>\n${params.context}\n</context>` }]
: []),
{ role: 'user', content: params.userMessage }
]
}
The TypeScript signature acts as a contract: adding a new variable input requires an explicit parameter, making it visible in code review rather than buried in a template literal.
ID: ai-prompt-injection.input-sanitization.parameterized-templates
Severity: low
What to look for: List all prompt templates used across the application. For each template, look at how prompt templates are built. Check whether the project uses a structured templating approach (a function that accepts typed parameters and returns a well-formed messages array) versus ad-hoc string concatenation scattered across route handlers. Look for a dedicated prompt construction module or utility that centralizes prompt building.
Pass criteria: Prompt construction is centralized in a dedicated function or module that accepts typed parameters, making it easy to audit what goes into each prompt. The function signature makes clear which parts are fixed instructions and which are variable — 100% of templates must use named placeholders rather than string interpolation. Report: "X prompt templates found, all Y use parameterized placeholders."
Fail criteria: Prompt building is scattered across multiple route handlers with ad-hoc string concatenation, making it difficult to audit what user input reaches the model.
Skip (N/A) when: No AI provider integration detected, or the project has only a single AI call with a simple, clearly-structured prompt that requires no template system.
Cross-reference: The no-direct-concatenation check verifies that call sites themselves do not bypass these templates.
Detail on fail: "Prompt construction is scattered across 4+ route handlers using ad-hoc string concatenation with no central prompt module" or "No typed prompt builder function — system prompt is assembled inline in each API handler"
Remediation: Centralizing prompt construction in a typed function makes it dramatically easier to audit, test, and update your prompt injection defenses. A simple pattern:
// lib/prompts.ts — single source of truth for prompt construction
export function buildChatMessages(params: {
userMessage: string
userName?: string
context?: string
}): ChatCompletionMessageParam[] {
return [
{ role: 'system', content: SYSTEM_PROMPT },
...(params.context ? [{ role: 'system' as const, content: `Context: ${params.context}` }] : []),
{ role: 'user', content: params.userMessage }
]
}