The FTC's 2024 Fake Reviews Rule explicitly treats AI-generated reviews the same as any other fabricated review — the generation method does not create a carve-out. An AI-generated review summary displayed in a 'What our customers are saying' section, without disclosure, leads consumers to believe they are reading distilled human opinion when they are reading AI output. AI-generated placeholder testimonials left on a landing page compound the violation by fabricating the appearance of an actual user base. Both scenarios create direct FTC Fake Reviews Rule liability.
Severity is medium because AI-generated review content presented as human opinion is treated as a fabricated review under the FTC's 2024 Fake Reviews Rule, but the degree of harm depends on whether consumers act on the AI output as if it reflected real user experience.
Label all AI-generated content in review and testimonial contexts at the point it appears — not in a site-wide disclaimer.
ID: ftc-consumer-protection.endorsement-disclosure.ai-generated-content-disclosed
Severity: medium
What to look for: Count all relevant instances and enumerate each. Search for AI content generation in review or testimonial pipelines. Look for: (1) LLM API calls (OpenAI, Anthropic, Cohere) in code paths that write to a reviews or testimonials table; (2) AI-generated review summaries displayed alongside or instead of individual user reviews (common in e-commerce — "AI summary of 500 reviews"); (3) "example" or "sample" testimonials generated by AI to populate new landing pages before real reviews exist; (4) AI-generated success stories or case studies presented as if they describe real customer experiences. The FTC's 2023 revised Endorsement Guides and the 2024 Fake Reviews Rule specifically address the need to disclose when content that appears to reflect human experience or opinion was generated by AI.
Pass criteria: Any AI-generated content in a review or testimonial context is clearly labeled as AI-generated. At least one implementation must be verified. AI review summaries are labeled as such (e.g., "AI-generated summary of user reviews"). Testimonial sections do not contain AI-generated placeholder quotes presented as real user experiences.
Fail criteria: AI-generated review summaries appear without disclosure, in a context where a consumer would assume they represent real user opinion. AI-generated example testimonials are displayed on landing pages without disclosure. An LLM writes to the testimonials table without the output being labeled as AI-generated.
Skip (N/A) when: The application uses no AI in any content generation pipeline related to reviews, testimonials, or user-experience narratives. AI chatbots and conversational assistants are out of scope for this check — they are covered by no-deceptive-ai-personas. Only skip if there is no AI that generates or summarizes text in a review/testimonial context.
Detail on fail: Example: "Product page shows 'What our customers are saying' section with an AI-generated summary from OpenAI. No disclosure that the summary is AI-generated rather than a real customer quote." or "Onboarding flow shows 'example' testimonials generated by GPT-4 to demonstrate the product — displayed without disclosure." or "LLM API call in src/api/testimonials/generate.ts writes AI-generated text directly to testimonials table with no label."
Remediation: Label all AI-generated content in review and testimonial contexts:
// AI review summary — clearly labeled
function AIReviewSummary({ summary, reviewCount }: {
  summary: string
  reviewCount: number
}) {
  return (
    <div className="bg-gray-50 rounded p-4">
      <div className="flex items-center gap-2 mb-2">
        <span className="text-xs bg-purple-100 text-purple-700 px-2 py-0.5 rounded">
          AI summary
        </span>
        <span className="text-xs text-gray-500">
          Generated from {reviewCount} verified customer reviews
        </span>
      </div>
      <p>{summary}</p>
      <p className="text-xs text-gray-500 mt-2">
        This summary was generated by AI based on user reviews.
        Individual reviews may differ.
      </p>
    </div>
  )
}
Never present AI-generated testimonials or case studies as real human experiences, even as placeholder content; if such placeholders exist, remove them before any user can see them. The FTC treats AI-generated fake reviews the same as any other fabricated reviews.
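The fail criteria above hinge on AI provenance being lost at the write path: once an LLM's output lands in the testimonials table as plain text, the renderer has no way to know it must be labeled. One way to prevent that, sketched below in TypeScript, is to carry the provenance on the stored record itself. The field names (`source`, `model`, `generatedAt`) and both helper functions are hypothetical, not part of this rule; real schemas will differ.

```typescript
// Hypothetical record shape for anything stored in a reviews or
// testimonials table. The `source` flag travels with the content,
// so downstream renderers can enforce disclosure.
type ReviewContent = {
  body: string
  source: "human" | "ai"   // renderers must branch on this
  model?: string           // which model generated it, if AI
  generatedAt?: string     // ISO timestamp of generation, if AI
}

// Wrap LLM output before persisting it, so AI provenance can never
// be silently dropped between generation and display.
function labelAIContent(body: string, model: string): ReviewContent {
  return {
    body,
    source: "ai",
    model,
    generatedAt: new Date().toISOString(),
  }
}

// Gate for testimonial rendering: only human-authored content may
// appear as a plain quote. AI content must instead flow through a
// labeled component such as AIReviewSummary above.
function isDisplayableAsTestimonial(record: ReviewContent): boolean {
  return record.source === "human"
}
```

With this shape in place, the failing pattern in the detail-on-fail examples ("LLM writes AI-generated text directly to testimonials table with no label") becomes a type error rather than a silent compliance gap, since the write path can only accept a `ReviewContent` whose `source` is set.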