Language models hallucinate — they generate plausible-sounding but factually incorrect output with no inherent signal to the user that something is wrong. NIST AI RMF MAP 5.1 explicitly calls out the risk of users treating AI output as authoritative. OWASP LLM09 categorizes this as a misinformation risk. Without a visible accuracy disclaimer, users who act on incorrect AI-generated advice — whether medical, legal, financial, or procedural — have no prior warning that verification was expected. The developer carries moral and increasingly legal exposure when that warning is absent.
Severity is info because the absence of a disclaimer is a transparency gap rather than a technical vulnerability, but it increases user harm from hallucinated outputs and weakens the developer's liability defense.
Add a single line of helper text near the AI chat input. Keep it short — a long legal disclaimer that users learn to ignore is worse than none.
// Near the AI chat input
<p className="text-xs text-muted-foreground mt-1">
AI can make mistakes. Verify important information before acting on it.
</p>
Place it below the input field or as persistent footer text in the chat panel, not in an onboarding modal the user dismisses once and never sees again. For domains where hallucination is high-stakes (medical, legal, financial), add domain-specific language: "Not medical advice. Consult a healthcare professional before acting on health information."
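If an app serves multiple domains, the disclaimer wording can be centralized in a small helper. This is a minimal sketch: the `disclaimerFor` function, the `Domain` type, and the exact wording beyond the two sentences quoted above are illustrative assumptions, not a prescribed API.

```typescript
// Hypothetical helper mapping a content domain to disclaimer text.
// The domain taxonomy and non-quoted wording are illustrative.
type Domain = "general" | "medical" | "legal" | "financial";

const DISCLAIMERS: Record<Domain, string> = {
  general:
    "AI can make mistakes. Verify important information before acting on it.",
  medical:
    "Not medical advice. Consult a healthcare professional before acting on health information.",
  legal:
    "Not legal advice. Consult a qualified attorney before acting on legal matters.",
  financial:
    "Not financial advice. Consult a licensed advisor before making financial decisions.",
};

export function disclaimerFor(domain: Domain = "general"): string {
  return DISCLAIMERS[domain];
}
```

The component then renders `disclaimerFor(domain)` in the helper-text paragraph, keeping each variant to a single sentence or two.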
ID: ai-data-privacy.data-collection-consent.ai-accuracy-disclaimer
Severity: info
What to look for: Search UI components near the AI input/output area for disclaimer text. Look for strings like "may make mistakes", "can be inaccurate", "verify important information", "AI can hallucinate", or similar. Check onboarding modals, chat interface components, and help text near AI feature entry points.
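An automated check can approximate this search with a pattern match over component source. A minimal sketch follows; the phrase list mirrors the strings above but is an assumption that should be tuned to a given codebase's wording.

```typescript
// Illustrative check: does a component's source contain any known
// accuracy-disclaimer phrase? The pattern list is an assumption --
// extend it with your codebase's actual wording.
const DISCLAIMER_PATTERNS: RegExp[] = [
  /may make mistakes/i,
  /can make mistakes/i,
  /can be inaccurate/i,
  /verify important information/i,
  /AI can hallucinate/i,
];

export function hasAccuracyDisclaimer(source: string): boolean {
  return DISCLAIMER_PATTERNS.some((pattern) => pattern.test(source));
}
```

Run this over the components that render the AI interface; a match in any of them satisfies the pass criteria below.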
Pass criteria: A disclaimer about AI accuracy limitations is visible near the AI interface, either inline below the input, in an onboarding modal, or as persistent footer text.
Fail criteria: No accuracy disclaimer found anywhere near the AI interface components.
Skip (N/A) when: The AI is used for deterministic classification tasks where the output space is fixed and hallucination is not applicable (e.g., a sentiment classifier returning only "positive" or "negative").
Detail on fail: "No AI accuracy disclaimer found in components rendering the AI interface — users are not warned about potential inaccuracies"
Remediation: A brief disclaimer sets appropriate expectations and reduces liability.
Add a single line near the AI chat input:
<p className="text-xs text-muted-foreground mt-1">
AI can make mistakes. Verify important information before acting on it.
</p>
This is intentionally minimal — a sentence is enough. Don't let it become a wall of legal text that users learn to ignore.