The EU AI Act requires that users be notified when they are interacting with an AI system in certain contexts, and similar transparency requirements are spreading across jurisdictions. When AI-generated responses are visually indistinguishable from human-authored content, users cannot make informed decisions about how much to trust or act on the output. OWASP LLM09 (Misinformation) classifies unmarked AI output as a reliability risk: users who mistake AI responses for authoritative human answers are more likely to act on hallucinated information. Disclosure is the minimum accountability signal an AI-powered product can provide.
Low, because the missing indicator is a transparency deficiency rather than a direct attack vector; it nonetheless enables harm through uninformed over-reliance on AI-generated content.
Add a visual label to every component that renders AI-generated content. An accessible aria-label ensures that screen-reader users receive the disclosure as well.
// components/ai-message.tsx
import { SparklesIcon } from "lucide-react" // adjust to your icon library

export function AiMessage({ content }: { content: string }) {
  return (
    <div className="ai-response" role="article" aria-label="AI-generated response">
      <span className="text-xs text-muted-foreground flex items-center gap-1">
        {/* Decorative icon: hidden from assistive tech, label text carries the disclosure */}
        <SparklesIcon className="h-3 w-3" aria-hidden="true" />
        AI response
      </span>
      <p>{content}</p>
    </div>
  )
}
Apply this to every rendering path — streaming responses, cached responses, and fallback states. The label should be visible without hover; users should not need to inspect the UI to know the content is AI-generated.
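One way to keep every rendering path consistent is to tag message origin at the data layer rather than styling each call site individually. The sketch below is a minimal illustration, assuming a hypothetical `ChatMessage` shape and `disclosureLabel` helper (neither is part of the component above); any render path that consumes a `ChatMessage` can derive the indicator from the single `origin` field.

```typescript
// Hypothetical message shape: tagging origin once at the data layer means
// streaming, cached, and fallback render paths all make the disclosure
// decision from the same field.
type ChatMessage = {
  id: string
  content: string
  origin: "user" | "ai"
}

// Single helper shared by all render paths; returns the accessible label
// for AI-generated messages, or null when no disclosure is needed.
function disclosureLabel(message: ChatMessage): string | null {
  return message.origin === "ai" ? "AI-generated response" : null
}
```

A component such as `AiMessage` can then render the badge whenever `disclosureLabel` returns a non-null value, so a newly added render path cannot silently skip the indicator.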
ID: ai-data-privacy.data-collection-consent.ai-processing-indicator
Severity: low
What to look for: Enumerate every relevant item. Examine UI components that display AI-generated content or AI responses. Look for labels, icons, badges, or aria attributes that distinguish AI-generated output from human/static content. Search for text strings like "AI", "Generated by", "Powered by", "AI response", sparkle icons (✦, ✨ as text/SVG references), or similar in component files that render AI output.
Pass criteria: At least one of the following conditions is met. Components rendering AI-generated content include at least one visual indicator that identifies the content as AI-generated. This can be an icon, a label, a tooltip, or an accessible aria-label.
Fail criteria: AI-generated responses are rendered identically to user messages or static content with no visual distinction.
Skip (N/A) when: The AI is used purely for backend tasks (classification, moderation, spam detection) with no AI-generated text presented directly to users.
Cross-reference: For user-facing accessibility and compliance, the Accessibility Basics audit covers foundational requirements.
Detail on fail: "Components rendering AI responses in [file(s)] contain no visible indicator distinguishing AI output from other content"
Remediation: Transparency about AI-generated content is required under the EU AI Act for certain use cases and is best practice universally. Users should know when they are reading machine-generated text.
Add a simple indicator to your AI response component:
// components/ai-message.tsx
import { SparklesIcon } from "lucide-react" // adjust to your icon library

export function AiMessage({ content }: { content: string }) {
  return (
    <div className="ai-response" role="article" aria-label="AI-generated response">
      <span className="text-xs text-muted-foreground flex items-center gap-1">
        {/* Decorative icon: hidden from assistive tech, label text carries the disclosure */}
        <SparklesIcon className="h-3 w-3" aria-hidden="true" />
        AI response
      </span>
      <p>{content}</p>
    </div>
  )
}