A chat UI that sits frozen for six seconds after submit looks broken. Users double-click submit (producing duplicate requests), refresh the page (losing the in-flight response), or assume the product crashed and leave. Streaming tokens or at minimum an animated typing indicator is the feedback loop that tells the user the system is working — and with modern LLM APIs that return a stream natively, there is no technical reason not to ship it.
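The double-click failure mode is worth guarding independently of streaming: a minimal sketch of an in-flight guard that ignores a second submit while a request is pending (names are illustrative, not from any library):

```typescript
// Hypothetical helper, not part of the Vercel AI SDK: tracks whether a
// request is already in flight so a double-clicked submit is ignored.
function createInFlightGuard() {
  let inFlight = false;
  return {
    // Returns true if the caller may start a request; false if one is pending.
    tryStart(): boolean {
      if (inFlight) return false;
      inFlight = true;
      return true;
    },
    // Call when the request settles (success or error).
    finish(): void {
      inFlight = false;
    },
  };
}
```

In a React component the same effect is usually achieved by disabling the submit button while `isLoading` is true; the guard above is the framework-free equivalent.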
Severity rationale: low, because it hurts perceived performance but does not cause functional failures.
Prefer token streaming — the Vercel AI SDK's useChat hook streams by default and requires no additional indicator because characters appear as they generate. If you're on a non-streaming path, render a three-dot bouncing indicator beneath the last user message while isLoading is true. Implement in src/components/chat/message-list.tsx.
{isLoading && <div className="flex gap-1"><span className="animate-bounce [animation-delay:-0.3s]">.</span><span className="animate-bounce [animation-delay:-0.15s]">.</span><span className="animate-bounce">.</span></div>}
ID: ai-ux-patterns.advanced-patterns.typing-indicator
Severity: low
What to look for: Count all AI generation entry points (chat submissions, completion triggers). For each, enumerate the visual feedback shown during generation: streaming token output, animated ellipsis, skeleton element, spinner, or typing indicator. Classify each as: (a) token-by-token streaming (ideal), (b) loading indicator (good), or (c) no indicator (fail). At least 1 generation entry point must have visual feedback.
Pass criteria: While the AI is generating, the user sees either streaming token output or a visible loading/typing indicator. There is no period where the UI appears frozen or unresponsive waiting for a response. Report on pass: "X of Y generation entry points have visual feedback."
Fail criteria: No loading indicator and no streaming display. The UI appears frozen or shows no feedback while waiting for the AI response.
Skip (N/A) when: Same as regeneration-button.
Detail on fail: "No streaming display or loading indicator detected — UI state during AI generation shows no visual feedback to the user."
Remediation: The typing indicator communicates "the AI is working" and prevents users from thinking the interface has crashed.
{isLoading && (
<div className="flex items-center gap-2 px-4 py-2 text-muted-foreground text-sm">
<div className="flex gap-1">
<span className="w-1.5 h-1.5 bg-current rounded-full animate-bounce [animation-delay:-0.3s]" />
<span className="w-1.5 h-1.5 bg-current rounded-full animate-bounce [animation-delay:-0.15s]" />
<span className="w-1.5 h-1.5 bg-current rounded-full animate-bounce" />
</div>
AI is thinking...
</div>
)}
Streaming (preferred — no additional loading indicator needed):
// With Vercel AI SDK useChat, messages stream token-by-token automatically
// The partial message content appears as it generates
{messages.map(message => (
<div key={message.id}>{message.content}</div>
))}