A domain-scoped assistant without a scope boundary becomes a general-purpose chatbot the moment a user asks anything off-topic. Customer-support bots start giving medical advice, code assistants opine on tax law, and the company inherits liability for confabulated answers outside its expertise. This violates the inference-contract taxon — you shipped a support product and delivered an unconstrained LLM. Worse, out-of-scope answers are where hallucination rates spike, because the model has no grounding.
Severity is medium because scope leakage expands liability and the hallucination surface beyond the product's tested domain.
Write an explicit scope statement in the system prompt naming the domain, and define a graceful refusal template that redirects users rather than rejecting them tersely. Update lib/ai/prompts.ts with:
const systemPrompt = `You are a support assistant for [Product]. You answer installation, configuration, billing, and usage questions. For anything else, reply: "That's outside what I can help with here. For [topic], try [resource]."`
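If the in-scope topics vary per deployment, the prompt can also be assembled from configuration instead of hard-coded. A minimal sketch — the `buildScopedSystemPrompt` helper and its parameters are illustrative, not part of any existing API:

```typescript
// Sketch: assemble a scope-bounded system prompt from configuration.
// All names here are hypothetical; adapt to your own prompt module.
export function buildScopedSystemPrompt(
  product: string,
  topics: string[], // in-scope topics, e.g. ["installation", "billing"]
  redirect: string, // where to send out-of-scope users
): string {
  const topicList = topics.join(", ");
  return (
    `You are a support assistant for ${product}. ` +
    `You answer ${topicList} questions. ` +
    `For anything else, reply: "That's outside what I can help with here. ` +
    `For other topics, try ${redirect}."`
  );
}
```

For example, `buildScopedSystemPrompt("Acme", ["installation", "configuration", "billing", "usage"], "support.example.com")` produces the same shape of prompt as the literal above, with the scope list kept in one place.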
ID: ai-response-quality.hallucination-prevention.out-of-scope-refusal
Severity: medium
What to look for: For applications with a defined scope (domain-specific assistants, customer support bots, code assistants, etc.), check whether the system prompt defines the scope boundary and instructs the AI to decline out-of-scope requests gracefully. Look for scope definition language in the system prompt — "You are a customer support agent for [Company] and answer only questions about [Product]", "Do not answer questions unrelated to [domain]". Check for a graceful refusal pattern ("That's outside what I can help with here — for that, you might try...") rather than a terse rejection.
Pass criteria: At least 1 conforming pattern must exist. System prompt defines the application's scope and includes a graceful out-of-scope refusal pattern that redirects the user rather than abruptly refusing.
Fail criteria: Application has a narrow-scope use case (customer support, domain-specific assistant) but the system prompt has no scope boundary or refusal instruction — AI will answer any question regardless of domain.
Skip (N/A) when: Application is explicitly a general-purpose assistant with no scope restrictions.
Detail on fail: "Customer support chatbot system prompt has no scope boundary — AI will answer unrelated questions and potentially confabulate" (max 500 chars)
Remediation: Define scope and a graceful refusal in your system prompt:
const systemPrompt = `
You are a support assistant for [Product]. You answer questions about installation,
configuration, billing, and usage. If a user asks about something outside this scope,
respond politely: "That's outside what I can help with here. For [topic], I'd suggest
[relevant resource or next step]."
`
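As a rough sketch of how this check could be automated, a review tool might scan the system prompt for both scope-definition and refusal phrasing. This heuristic is an assumption for illustration — the function name and regex patterns are not a specification of any real scanner, and a production check would need broader patterns:

```typescript
// Heuristic sketch: does a system prompt contain both scope-definition
// and graceful-refusal language? Patterns are illustrative only.
const SCOPE_PATTERNS: RegExp[] = [
  /you are an? [\w\s-]+ (assistant|agent|bot) for/i,
  /answer only questions about/i,
  /do not answer questions unrelated to/i,
];

const REFUSAL_PATTERNS: RegExp[] = [
  /outside (what|the scope)/i,
  /can't help with (that|here)/i,
  /i'd suggest|you might try/i,
];

export function hasScopeBoundary(systemPrompt: string): boolean {
  const defines = SCOPE_PATTERNS.some((p) => p.test(systemPrompt));
  const refuses = REFUSAL_PATTERNS.some((p) => p.test(systemPrompt));
  return defines && refuses; // pass requires both halves of the pattern
}
```

Run against the remediation prompt above, both halves match, so the check passes; a bare "You are a helpful assistant." matches neither and fails.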