The FTC's 2024 AI guidance distinguishes between AI as a writing tool and AI-generated content that would be material to a consumer's decision — such as a founder letter, an expert opinion, or personalized outreach implying individual human judgment. A blog post attributed to a named human author but generated by an LLM, or a CEO voice email campaign written by AI, deceives consumers about the source of the advice they are relying on. Where authorship is a credibility signal, the FTC Act §5 deception standard applies to AI authorship just as it does to fabricated human authorship.
Severity is low because AI-authorship deception requires a consumer to act on implied human expertise they would not have trusted had they known the source — the harm is real, but it depends on how much weight the consumer placed on the implied authorship.
Add an AI-assistance disclosure to any content where authorship affects credibility, and restructure AI-written email campaigns to avoid false first-person voice.
ID: ftc-consumer-protection.ai-decisions.ai-content-disclosed
Severity: low
What to look for: Count all relevant instances and enumerate each. Look for LLM API usage in content generation pipelines that produce consumer-facing text. Specifically: (1) blog posts generated by AI and published without disclosure; (2) product descriptions, help articles, or comparison pages drafted by AI and presented as expert human writing; (3) email campaigns generated by AI in a personalized voice that implies direct human authorship ("I personally want to share..."); (4) AI-written social proof content. The FTC's evolving guidance on AI distinguishes between AI as a writing aid (where disclosure is not always required) and AI-generated content that would be material to a consumer's decision — such as an AI-written expert opinion, an AI-impersonated founder letter, or AI-generated personalization that appears to be human judgment. Check: does the AI-generated content appear in a context where a consumer would care that it was AI-generated rather than written by a human expert or the actual person implied?
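One way to surface candidates for this review is a lightweight scan for LLM SDK usage in content-generation source files. A minimal sketch in TypeScript — the SDK package names below are assumptions covering common SDKs, not part of the check itself:

```typescript
// Heuristic: does this source file appear to import an LLM SDK?
// Files flagged here are only candidates; the substantive question is
// whether their output reaches consumers in a context where authorship
// is a credibility signal. (Package names are assumptions — extend for
// your stack.)
const LLM_SDK_PATTERNS: RegExp[] = [
  /from\s+['"]openai['"]/,
  /from\s+['"]@anthropic-ai\/sdk['"]/,
  /require\(\s*['"]openai['"]\s*\)/,
];

function usesLlmSdk(source: string): boolean {
  return LLM_SDK_PATTERNS.some((p) => p.test(source));
}
```

A flagged file is a starting point, not a finding: follow the generated text to where it is published and apply the materiality test there.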
Pass criteria: AI-generated content that could be material to a consumer's decision (expert reviews, founder letters, personalized recommendations framed as human judgment) is either disclosed or written in a way that does not imply human authorship. At least 1 implementation must be verified. Blog posts generated primarily by AI include a disclosure ("Written with AI assistance" or "AI-generated") if they are presented as expert human opinion.
Fail criteria: AI-generated blog posts are attributed to a named human author without disclosure. AI-drafted email campaigns use first-person language implying a human wrote the specific message to that user. AI-generated "expert opinions" appear without disclosure in a context where authorship matters.
Skip (N/A) when: The application uses no AI to generate static or asynchronous consumer-facing content (blog posts, product descriptions, email campaigns, landing page copy). AI chatbots and conversational interfaces are explicitly excluded from this check — they are covered by no-deceptive-ai-personas. This check concerns AI authorship of content artifacts, not AI-powered conversations.
Detail on fail: Example: "Blog posts generated by OpenAI API are published under a named author's byline with no AI assistance disclosure." or "Monthly email campaign uses GPT-4 to write personalized outreach in the CEO's voice ('Hi, I wanted to personally reach out...') with no disclosure that the message is AI-generated." or "Product comparison page claims to be 'written by our expert team' but is generated by LLM with no human review or disclosure."
Remediation: Add disclosure to AI-generated content where authorship is material:
// Blog post with AI assistance disclosure
function BlogPostMeta({ post }: { post: Post }) {
  return (
    <div className="text-sm text-gray-600">
      <span>By {post.author}</span>
      {post.aiAssisted && (
        <span className="ml-3 bg-gray-100 px-2 py-0.5 rounded text-xs">
          Written with AI assistance
        </span>
      )}
      <span className="ml-3">{formatDate(post.publishedAt)}</span>
    </div>
  )
}
// For AI-generated emails — use a clear sender voice
// AVOID: "I personally wanted to reach out to you, [name]..."
// (when the email is AI-generated to thousands of users)
//
// PREFER: "The [Product] team wanted to share..."
// or use a clearly branded automated email format
// that does not imply individual human authorship
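The `post.aiAssisted` flag read by the component above is easiest to get right if it is set where the content is produced, not where it is rendered. A sketch of that pattern — the `Post` shape and the `draftWithLlm` helper are hypothetical:

```typescript
interface Post {
  author: string;
  body: string;
  aiAssisted: boolean;
  publishedAt: Date;
}

// Hypothetical LLM draft step; a real pipeline would call the model API
// here. It returns the flag together with the text so the two cannot be
// separated downstream.
function draftWithLlm(prompt: string): { body: string; aiAssisted: true } {
  return { body: `Draft for: ${prompt}`, aiAssisted: true };
}

function publishPost(
  author: string,
  draft: { body: string; aiAssisted: boolean },
): Post {
  // The disclosure flag travels with the content from generation to render.
  return {
    author,
    body: draft.body,
    aiAssisted: draft.aiAssisted,
    publishedAt: new Date(),
  };
}
```

Setting the flag at generation time means the rendering layer never has to guess — and a missing disclosure becomes a data bug rather than an editorial oversight.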
The test: would a reasonable consumer make a different decision if they knew the content was AI-generated? If yes, disclose it. If the content is commodity copy (standard product description, error messages, FAQs) where authorship is not material, disclosure is not required.
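The materiality test can also be encoded as an explicit policy table in the content pipeline, so the disclosure decision is auditable rather than ad hoc. A sketch — the content-kind taxonomy is an assumption; the mapping follows the examples in this check:

```typescript
type ContentKind =
  | "expert-review"
  | "founder-letter"
  | "personalized-email"
  | "faq"
  | "error-message"
  | "product-description";

// Kinds where authorship is a credibility signal, per the materiality
// test: would a reasonable consumer decide differently knowing the
// content was AI-generated?
const AUTHORSHIP_MATERIAL: ReadonlySet<ContentKind> = new Set<ContentKind>([
  "expert-review",
  "founder-letter",
  "personalized-email",
]);

function requiresDisclosure(kind: ContentKind, aiGenerated: boolean): boolean {
  return aiGenerated && AUTHORSHIP_MATERIAL.has(kind);
}
```

The table makes the judgment call reviewable: adding a new content kind forces an explicit decision about whether authorship is material for it.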