A handful of JavaScript APIs parse strings as live code or live HTML — dangerouslySetInnerHTML, eval(...), new Function(...), document.write(...), and direct element.innerHTML = ... assignment. Each one turns a string into executable behavior. When the string originates from user-controllable input — a request body, URL parameter, cookie, or database row that was itself populated from user input — and reaches the sink without sanitization, the result is stored or reflected XSS, arbitrary script execution, and in server contexts potential RCE via new Function(). The 2018 British Airways Magecart attack skimmed 380,000 customer payment cards through exactly this pattern: an injected script reached a rendering sink that trusted its input. The Information Commissioner's Office fined BA £20 million under GDPR Article 32. OWASP ranks Injection as the #3 web risk in its 2021 Top 10. AI coding tools produce this pattern whenever they scaffold a "rich text preview", a "markdown renderer", or a "dynamic expression evaluator" without wiring in a sanitizer — the happy path works, the attack path is invisible until exploited.
Critical because a single path from user input to a live-HTML or live-code sink produces client-side XSS (session-cookie theft, account takeover) or server-side RCE — the full Magecart / supply-chain-compromise playbook in a single unsanitized line.
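The tainted flow described above can be sketched with a hypothetical server-side "dynamic expression evaluator" of the kind an AI tool might scaffold. The function name evalDiscount and the formula parameter are illustrative assumptions, not taken from any real codebase:

```typescript
// UNSAFE sketch: a hypothetical "discount formula" feature where a
// user-supplied string becomes live code via the new Function() sink.
function evalDiscount(formula: string, price: number): number {
  // new Function compiles arbitrary attacker-controlled text into JS.
  return new Function("price", `return ${formula};`)(price);
}

// Happy path works, which is why the bug ships:
console.log(evalDiscount("price * 0.9", 100)); // 90

// Attack path: the same sink runs any expression an attacker sends.
// This benign payload only sets a global flag; a real one could read
// process.env or load child_process in a Node server context.
evalDiscount("(globalThis.pwned = true, price)", 100);
console.log((globalThis as any).pwned); // true
```

The happy-path call and the attack payload go through the identical code path, which is why the vulnerability is invisible to functional testing.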
Route every user-sourced string through a sanitizer before it reaches a dangerous sink:
import DOMPurify from 'isomorphic-dompurify';

export function UserPost({ html }: { html: string }) {
  const clean = DOMPurify.sanitize(html, {
    ALLOWED_TAGS: ['b', 'i', 'em', 'strong', 'a', 'p', 'br', 'ul', 'ol', 'li'],
    ALLOWED_ATTR: ['href'],
  });
  return <div dangerouslySetInnerHTML={{ __html: clean }} />;
}
Never pass user input to eval(...) or new Function(...) at all — those APIs have no safe usage in user-facing code. For expression-evaluator use cases, use a sandboxed library like expr-eval or jsep that parses to an AST rather than eval'ing strings. For a deeper XSS-audit pass including CSP nonce integration, DOMPurify version pinning, and dependency-graph sink discovery, run the security-hardening and ai-slop-security-theater Pro audits.
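As a dependency-free alternative to those libraries, the lookup-table approach can be sketched as follows. The function applyOp and the operation names are illustrative assumptions; the point is that user input only ever selects from an allowlist of functions and is never compiled:

```typescript
// Sketch of a lookup-table evaluator: the user-controlled string picks
// an operation from a fixed allowlist instead of becoming code.
type BinaryOp = (a: number, b: number) => number;

const OPS: Record<string, BinaryOp> = {
  add: (a, b) => a + b,
  sub: (a, b) => a - b,
  mul: (a, b) => a * b,
};

function applyOp(name: string, a: number, b: number): number {
  const op = OPS[name];
  // Unknown names are rejected outright, never evaluated.
  if (!op) throw new Error(`Unsupported operation: ${name}`);
  return op(a, b);
}

console.log(applyOp("mul", 6, 7)); // 42
```

Because the input string is only ever used as a dictionary key, an attacker's payload can at worst trigger the "Unsupported operation" error.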
Audit rule: project-snapshot.security.dangerous-sinks-not-fed-user-content (severity: critical). Scan .{tsx,jsx,ts,js} files (excluding node_modules, dist, build, .next) for: dangerouslySetInnerHTML, eval(, new Function(, document.write(, and the regex \.innerHTML\s*=. For each match, trace whether the value originates (within 2 import hops) from a tainted source: req.body.*, req.json(), req.formData(), params.*, searchParams.*, useSearchParams().get(...), cookies().get(...), formData.get(...), or Prisma/Supabase rows populated by user input. If the value passes through a library-grade sanitizer (DOMPurify.sanitize, sanitize-html, xss, isomorphic-dompurify) before the sink, it's safe. A homegrown .replace(/<script>/g, '') does NOT count: regex-based HTML sanitization is defeated by malformed tags, nested encoding, SVG event handlers, and dozens of known bypasses. Only library-grade sanitizers count. Report every innerHTML / eval usage.

Example pass: "3 dangerouslySetInnerHTML calls, all passing static markdown compiled at build time via remark-html; 0 from user sources."

Example fail: "src/app/post/[id]/page.tsx:42 — dangerouslySetInnerHTML={{ __html: post.body }} where post.body is a Prisma row populated from req.body.content on POST /api/posts with no sanitizer".

Fix for eval / new Function with user input: replace with an AST-based parser or lookup-table evaluator; there is no safe usage.
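To see why regex-based sanitization is rejected by this rule, here is a minimal sketch of two classic bypasses. The function naiveSanitize is a hypothetical name for the .replace(/<script>/g, '') pattern:

```typescript
// Sketch: why regex "sanitization" fails against real payloads.
function naiveSanitize(html: string): string {
  return html.replace(/<script>/g, "");
}

// Bypass 1: an event handler needs no <script> tag at all,
// so the filter passes it through untouched.
const img = `<img src=x onerror="alert(1)">`;
console.log(naiveSanitize(img)); // unchanged, still executes onerror

// Bypass 2: nesting the forbidden tag inside itself means the
// filter's own deletion reassembles the payload.
const nested = `<scr<script>ipt>alert(1)</script>`;
console.log(naiveSanitize(nested)); // "<script>alert(1)</script>"
```

Stripping the inner `<script>` from the nested payload splices the surrounding fragments back into a live script tag, which is why only parser-based, library-grade sanitizers count.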