LLM responses arrive token by token, and markdown syntax gets sliced mid-tag during delivery — an unclosed code fence, a half-written bold marker, a partial list item. A naive renderer flashes raw syntax (**text** instead of bold) and broken layouts between tokens, then snaps to formatted output when the stream completes. That flicker reads as broken software to end users and undermines the perceived quality of an otherwise functional AI feature, degrading both the user-experience and performance concerns this pattern covers.
Severity is info because the output is still delivered correctly; only the rendering polish during streaming is degraded.
Wrap the streaming message body in react-markdown with remark-gfm, then memoize the component with React's memo so incoming tokens do not re-render sibling messages. The markdown parser handles incomplete syntax gracefully, and memoization prevents the entire conversation list from recomputing on every token. Implement in src/components/message-content.tsx.
import { memo } from "react";
import ReactMarkdown from "react-markdown";
import remarkGfm from "remark-gfm";
export const MessageContent = memo(function MessageContent({ content }: { content: string }) {
  return <ReactMarkdown remarkPlugins={[remarkGfm]}>{content}</ReactMarkdown>;
});
ID: ai-token-optimization.streaming-performance.streaming-partial-render
Severity: info
What to look for: In the React component that renders streaming AI responses, check whether the markdown renderer or text display can handle incomplete content gracefully. Look for react-markdown with appropriate plugins, MemoizedReactMarkdown patterns, or other renderers that handle partial markdown syntax (e.g., an unclosed code fence ``` mid-stream). Also look for memo wrapping of message components to prevent full re-render on each incoming token. Count all instances found and enumerate each.
Pass criteria: The UI renders streamed tokens progressively without layout breaks, raw syntax display, or full component re-renders on each token. Markdown formatting begins rendering correctly even when the stream is mid-sentence or mid-code-block. At least 1 implementation must be confirmed.
Fail criteria: The UI shows raw markdown syntax during streaming (e.g., **bold** instead of bold), breaks layout until the stream completes, or causes excessive re-renders that make the UI janky during streaming.
Skip (N/A) when: The application only outputs plain text (no markdown formatting) or the AI feature is non-interactive and does not render to a user interface.
Signal: No markdown rendering library detected (react-markdown, marked, remark) and AI output is displayed in a plain <pre> or <p> element.
Cross-reference: The streaming-error-handling check verifies error recovery during the streaming flow validated here.
Detail on fail: "Streaming responses render raw markdown or cause layout breaks until stream completes"
Remediation: Streaming cuts markdown syntax mid-tag during delivery. A renderer that cannot handle partial content will flicker between raw syntax and formatted output, creating a poor visual experience.
// src/components/message-content.tsx
import { memo } from "react";
import ReactMarkdown from "react-markdown";
import remarkGfm from "remark-gfm";

// Memoize to avoid re-rendering the entire list on each new token
export const MessageContent = memo(function MessageContent({
  content,
}: {
  content: string;
}) {
  return (
    <ReactMarkdown
      remarkPlugins={[remarkGfm]}
      components={{
        // Prevent layout jumps by using consistent container elements
        p: ({ children }) => <p className="mb-2 last:mb-0">{children}</p>,
        // Note: the `inline` prop is passed by react-markdown v8;
        // v9 removed it from code renderer props, so check your version.
        code: ({ inline, children, ...props }) =>
          inline ? (
            <code className="bg-muted px-1 rounded text-sm" {...props}>
              {children}
            </code>
          ) : (
            <pre className="bg-muted p-3 rounded overflow-x-auto">
              <code {...props}>{children}</code>
            </pre>
          ),
      }}
    >
      {content}
    </ReactMarkdown>
  );
});
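One gap the renderer does not close on its own: the contents of an unclosed code fence display as plain text until the closing ``` token arrives. A small preprocessing step can balance fences before handing content to the renderer — a minimal sketch, where `closeDanglingFence` is a hypothetical helper name, not a library API:

```typescript
// Sketch: if the partial stream contains an odd number of fence lines,
// append a closing fence so the in-flight code block renders as code.
// (closeDanglingFence is an illustrative name, not part of any library.)
function closeDanglingFence(partial: string): string {
  const fenceLines = (partial.match(/^```/gm) ?? []).length;
  return fenceLines % 2 === 1 ? partial + "\n```" : partial;
}
```

Pass `closeDanglingFence(content)` to the renderer instead of `content`; once the real closing fence arrives, the fence count is even and the helper becomes a no-op.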
Verify by generating a response with code blocks and markdown formatting — the UI should render formatting progressively without showing raw syntax mid-stream.