Language models have a training cutoff and cannot know about events, legislation, software releases, or market conditions that postdate it. When an application provides no disclosure — neither in the system prompt nor the UI — users treat AI responses as current, leading to decisions based on outdated information. For compliance-sensitive domains (GDPR enforcement updates, securities regulations, medical guidelines), acting on stale AI output can cause real harm. NIST AI RMF GOVERN-1.1 requires transparency about AI system limitations. OWASP LLM09 identifies temporal misinformation as an LLM risk category.
High, because users without a cutoff disclosure will act on stale AI information as if it were current, with no signal that facts, laws, or specifications may have materially changed.
Add a cutoff disclosure to the system prompt with the model's specific training cutoff date:
const systemPrompt = `
Your knowledge cutoff is [MODEL_CUTOFF_DATE]. For questions about recent events,
current prices, active legislation, or software versions, proactively note that
your information may be outdated and direct the user to verify with a current source.
`
Alternatively, add a static footer or tooltip in the chat UI — for example, src/components/chat/ChatPanel.tsx — displaying: "AI responses reflect training data through [date]. Verify time-sensitive information independently."
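A minimal sketch of the UI option, assuming a React-style chat panel; `KNOWLEDGE_CUTOFF` and the footer markup are hypothetical names, not part of any existing codebase:

```typescript
// Hypothetical constant; set this to your model's documented cutoff date.
const KNOWLEDGE_CUTOFF = "2024-06";

// Builds the static disclosure text shown in a chat footer or info tooltip.
function cutoffDisclosure(cutoff: string): string {
  return (
    `AI responses reflect training data through ${cutoff}. ` +
    `Verify time-sensitive information independently.`
  );
}

// In a React component (e.g. a ChatPanel footer), the string could be rendered as:
//   <footer className="cutoff-note">{cutoffDisclosure(KNOWLEDGE_CUTOFF)}</footer>
console.log(cutoffDisclosure(KNOWLEDGE_CUTOFF));
```

Keeping the text in a helper rather than inline keeps the disclosure consistent if it appears in both a footer and a tooltip.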
ID: ai-response-quality.source-attribution.knowledge-cutoff-disclosure
Severity: high
What to look for: Enumerate all relevant files. Check the system prompt for any mention of the model's training cutoff date, or for instructions directing the AI to proactively disclose when a question involves current events, recent data, or time-sensitive information that may fall outside its training window. Check whether the application's UI includes any static disclosure about AI knowledge limitations (e.g., "This AI's knowledge has a cutoff of [date]" in a footer or info tooltip).
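The system-prompt half of this check can be approximated mechanically. A hedged sketch using a simple keyword heuristic; the patterns below are illustrative, not an exhaustive detector:

```typescript
// Heuristic: does a system prompt mention a knowledge cutoff or instruct the
// model to flag time-sensitive topics? Patterns are illustrative examples only.
const CUTOFF_PATTERNS: RegExp[] = [
  /knowledge\s+cutoff/i,
  /training\s+(data|cutoff)/i,
  /may\s+be\s+outdated/i,
  /verify\s+.*current\s+source/i,
];

// Returns true if any disclosure-like phrase appears in the prompt text.
function hasCutoffDisclosure(systemPrompt: string): boolean {
  return CUTOFF_PATTERNS.some((pattern) => pattern.test(systemPrompt));
}
```

A regex scan of this kind can only suggest a pass; absence of matches still warrants a manual read of the prompt before recording a fail.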
Pass criteria: At least one implementation must be present: either (a) the system prompt instructs the AI to proactively flag time-sensitive topics with a knowledge cutoff caveat, or (b) the application UI displays a static disclosure about the AI's knowledge limitations.
Fail criteria: No cutoff disclosure mechanism exists in either the system prompt or the UI for an application that answers questions about real-world facts, events, or current information.
Skip (N/A) when: Application is strictly a code assistant, document summarizer, or similar tool where knowledge currency is irrelevant to the use case.
Detail on fail: "No knowledge cutoff instruction in system prompt and no static UI disclosure — users may not know AI responses could be outdated" (max 500 chars)
Remediation: Add a cutoff disclosure to your system prompt and/or UI:
const systemPrompt = `
Your knowledge has a cutoff date of [MODEL_CUTOFF_DATE]. For questions about
recent events, laws, prices, software versions, or anything that may have changed,
proactively note that your information may be outdated and suggest the user verify
with a current source.
`
Alternatively, add a visible note in the chat UI: "AI responses are based on training data with a knowledge cutoff of [date]."
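To avoid hard-coding a date that drifts as models change, the cutoff can be looked up from the configured model at prompt-build time. A sketch with hypothetical model IDs and illustrative dates; verify actual cutoffs against your provider's documentation:

```typescript
// Illustrative map; the model IDs and dates here are assumptions, not
// authoritative values for any real provider.
const MODEL_CUTOFFS: Record<string, string> = {
  "example-model-a": "2024-04",
  "example-model-b": "2023-10",
};

// Interpolates the configured model's cutoff into the system prompt.
function buildSystemPrompt(modelId: string): string {
  // Fall back to a generic caveat if this model's cutoff is unknown.
  const cutoff = MODEL_CUTOFFS[modelId] ?? "an unspecified date";
  return (
    `Your knowledge has a cutoff of ${cutoff}. For questions about recent ` +
    `events, laws, prices, or software versions, proactively note that your ` +
    `information may be outdated and suggest the user verify with a current source.`
  );
}
```

Centralizing the date this way means a model upgrade requires changing one map entry rather than hunting for hard-coded dates in prompts and UI strings.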