The FTC has flagged fully automated consequential decisions made without explanation or recourse as unfair practices under FTC Act §5, and GDPR Article 22 gives EU users the right not to be subject to solely automated decisions with legal or similarly significant effects, including a right to human review. Algorithmic pricing that shows different prices to different users without disclosure, automated content moderation with no appeal path, and behavioral scoring that assigns service tiers without explanation each expose consumers to material harms they have no mechanism to identify or contest. Together, FTC unfairness doctrine and GDPR Article 22 create dual-jurisdiction liability for products with EU users.
Low, because automated-decision harms require the consumer both to experience an adverse outcome and to be unable to identify its source: a multi-step harm that is real but not immediate. The FTC's increasing focus on algorithmic systems, however, raises the trajectory of enforcement risk.
Disclose automated decision systems in the privacy policy and provide a recourse path for consequential decisions.
```tsx
// Show users why they see what they see.
// showPreferenceSettings() is app-specific — wire it to your settings UI.
function RecommendationExplanation({ reason }: { reason: string }) {
  return (
    <p className="text-xs text-gray-500">
      Recommended because: {reason}
      {' — '}
      <button className="underline" onClick={() => showPreferenceSettings()}>
        adjust your preferences
      </button>
    </p>
  )
}
```
Add to src/app/privacy/page.tsx or equivalent:
```md
## Automated Decision-Making

We use automated systems to [personalize recommendations / assign
service tiers / moderate content]. These systems use [your usage
patterns / your account type / reported content].

If you believe an automated decision affecting your account is
incorrect, contact support@example.com. We will review the decision
within [X business days].
```
For dynamic pricing: disclose in the privacy policy that prices may vary based on account or usage signals. For EU users, GDPR Article 22 requires a recourse path for solely automated decisions with legal or similarly significant effects: at minimum, the right to obtain human intervention, express a point of view, and contest the decision.
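The human-review path can be a simple intake queue with an SLA deadline. This is a minimal sketch, assuming an in-memory store; `ReviewRequest`, `requestHumanReview`, `addBusinessDays`, and the default 5-business-day SLA are all illustrative names and values, not part of any framework:

```typescript
// Sketch of a human-review request queue for consequential automated
// decisions (the GDPR Art. 22 recourse path). All names are illustrative.
type ReviewRequest = {
  userId: string
  decisionId: string          // e.g. "pricing-tier-assignment"
  reason: string              // the user's statement of why the decision looks wrong
  receivedAt: Date
  status: 'pending' | 'in_review' | 'resolved'
}

const reviewQueue: ReviewRequest[] = []

// Accept a review request and compute the response deadline promised
// in the privacy policy ("[X business days]").
function requestHumanReview(
  userId: string,
  decisionId: string,
  reason: string,
  slaBusinessDays = 5,
): { request: ReviewRequest; respondBy: Date } {
  if (!reason.trim()) throw new Error('A reason is required to route the review')
  const request: ReviewRequest = {
    userId, decisionId, reason, receivedAt: new Date(), status: 'pending',
  }
  reviewQueue.push(request)
  return { request, respondBy: addBusinessDays(request.receivedAt, slaBusinessDays) }
}

// Walk forward day by day, counting only Mon–Fri toward the SLA.
function addBusinessDays(start: Date, days: number): Date {
  const d = new Date(start)
  let remaining = days
  while (remaining > 0) {
    d.setDate(d.getDate() + 1)
    const dow = d.getDay()
    if (dow !== 0 && dow !== 6) remaining--
  }
  return d
}
```

In a real product the queue would be a database table and the intake would live behind an authenticated endpoint; the point is that the disclosure's review promise maps to a concrete, dated obligation.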
ID: ftc-consumer-protection.ai-decisions.automated-decisions-explained
Severity: low
What to look for: Count all relevant instances and enumerate each. Identify automated decision systems in the codebase that have material effects on users: (1) algorithmic pricing that shows different prices to different users; (2) automated eligibility determinations (access tier assignment, loan or credit decisions, content moderation that results in account suspension); (3) recommendation systems that significantly influence user behavior or spending; (4) automated email suppression or communication frequency based on behavioral scoring. For each system found, check whether it is disclosed in the privacy policy or product documentation, and whether users have a mechanism to understand the decision or request human review. The FTC has increasingly flagged fully automated consequential decisions without explanation or recourse as unfair practices.
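The enumeration step can begin with a source scan. The patterns below are illustrative guesses at common naming, not a definitive list; expect to tune them per codebase:

```shell
# Hypothetical grep patterns for surfacing candidate automated-decision
# code paths. "|| true" keeps the scan going when a pattern has no matches.
grep -rnE 'dynamicPric|priceForUser|personalized(Price|Offer)' src/ || true  # pricing segmentation
grep -rnE 'assignTier|eligibilit|creditScore|riskScore' src/ || true         # eligibility / tiering
grep -rnE 'autoModerat|suspend(User|Account)' src/ || true                   # moderation actions
grep -rnE 'recommendFor|behaviorScore|engagementScore' src/ || true          # recommendations / scoring
```

Each hit is a candidate to check against the disclosure and recourse criteria, not a finding by itself.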
Pass criteria: Automated decisions that materially affect users are disclosed in the privacy policy or a dedicated "How it works" page. At least 1 implementation must be verified. Users can see the basis for an automated decision (e.g., "Your plan was assigned based on your usage during onboarding") or request a human review. Opt-out mechanisms exist where technically feasible.
Fail criteria: Consequential automated decisions (pricing segmentation, access tier assignment, content moderation) are implemented with no disclosure anywhere. No mechanism for users to understand or appeal automated decisions. Dynamic pricing shows different prices to different users with no disclosure that prices are personalized.
Skip (N/A) when: The application has no automated decision systems — no personalization, no algorithmic pricing, no automated eligibility or moderation decisions.
Detail on fail: Example: "Application uses behavioral scoring to assign users to different pricing tiers dynamically. This is not disclosed in the privacy policy or product documentation." or "Automated content moderation can suspend user accounts. No disclosure of moderation criteria or appeal process found." or "Recommendation algorithm significantly influences user purchasing decisions. No disclosure of how recommendations are generated."
Remediation: Disclose automated decision systems and provide recourse:
```tsx
// Decision explanation component — show users why they see what they see.
// showPreferenceSettings() is app-specific — wire it to your settings UI.
function RecommendationExplanation({ reason }: { reason: string }) {
  return (
    <p className="text-xs text-gray-500">
      Recommended because: {reason}
      {' — '}
      <button
        className="underline"
        onClick={() => showPreferenceSettings()}
      >
        adjust your preferences
      </button>
    </p>
  )
}
```
Privacy policy addition for automated decisions:
```md
## Automated Decision-Making

We use automated systems to [describe the systems: personalize
recommendations / assign service tiers / moderate content]. These
systems use [brief explanation of signals: your usage patterns /
your account type / reported content].

If you believe an automated decision affecting your account is
incorrect, you can [contact support at support@example.com /
request a human review through your account settings]. We will
review the decision within [X business days].
```
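The bracketed explanation and review promises above are cheap to honor only if each automated decision is recorded with its plain-language basis at decision time. A sketch, with illustrative names (`AutomatedDecision`, `recordDecision`, `explainLatestDecision`), not a prescribed schema:

```typescript
// Sketch: log every automated decision with a human-readable basis so the
// UI and support staff can surface it later. All names are illustrative.
type AutomatedDecision = {
  userId: string
  kind: 'pricing_tier' | 'moderation' | 'recommendation'
  outcome: string   // e.g. "Assigned Pro tier"
  basis: string     // plain-language reason shown to the user on request
  decidedAt: Date
}

const decisionLog: AutomatedDecision[] = []

function recordDecision(d: Omit<AutomatedDecision, 'decidedAt'>): AutomatedDecision {
  const entry = { ...d, decidedAt: new Date() }
  decisionLog.push(entry)
  return entry
}

// What support (or the user) sees when a decision is questioned.
function explainLatestDecision(userId: string, kind: AutomatedDecision['kind']): string {
  const entry = [...decisionLog].reverse().find(d => d.userId === userId && d.kind === kind)
  return entry
    ? `${entry.outcome} — based on: ${entry.basis}`
    : 'No automated decision on record for this account.'
}
```

With a log like this, the pass criterion ("Your plan was assigned based on your usage during onboarding") becomes a lookup rather than a reconstruction.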