Platforms that host user-generated content without an Acceptable Use Policy have no enforceable basis to remove harmful content or terminate abusive accounts. The EU Digital Services Act (Regulation (EU) 2022/2065) requires platforms to publish clear and accessible terms covering content restrictions and enforcement procedures. Without an AUP, content moderation becomes arbitrary and legally indefensible, and the platform has no recourse against users who post illegal content, harassment, or spam. Courts have found that platforms with no documented moderation policies are more liable for harmful content than those with enforced policies, not less.
Severity is medium because the absence of an AUP exposes the platform to liability for user-generated content and prevents legally defensible content moderation.
Create an Acceptable Use Policy page at /acceptable-use (or add a dedicated section to your Terms of Service) and link to it from your footer and any content submission form.
The AUP must cover at minimum:
1. Prohibited Content — specific categories:
- Illegal content, hate speech, harassment, spam, malware, CSAM
- Impersonation of other users or your brand
2. Prohibited Actions — scraping, bypassing security, multi-account ban evasion
3. Enforcement — warning → content removal → suspension → permanent ban
4. Reporting — URL or email address for submitting violation reports
5. Appeals — whether users can appeal bans and how
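The enforcement ladder in item 3 can be sketched as a simple escalation function. This is an illustrative sketch only, not legal guidance: the thresholds, the `EnforcementAction` type, and the severe-violation shortcut are all assumptions, and real policies typically reserve immediate bans for the most serious categories regardless of prior history.

```typescript
// Illustrative escalation ladder: warning → content removal → suspension → permanent ban.
// Thresholds are hypothetical assumptions; adjust to your actual policy.
type EnforcementAction =
  | "warning"
  | "content_removal"
  | "suspension"
  | "permanent_ban";

function nextAction(priorViolations: number, severe: boolean): EnforcementAction {
  // Severe categories (e.g. CSAM, credible threats) skip the ladder entirely.
  if (severe) return "permanent_ban";
  if (priorViolations === 0) return "warning";
  if (priorViolations === 1) return "content_removal";
  if (priorViolations === 2) return "suspension";
  return "permanent_ban";
}
```

Encoding the ladder in code like this keeps enforcement consistent with the published policy, which matters if a ban is ever challenged.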
Reference the AUP from your Terms of Service: "Your use of the Service is also subject to our Acceptable Use Policy."
ID: legal-pages-compliance.required-pages.aup-if-ugc
Severity: medium
What to look for: Enumerate every relevant item. First determine whether the application hosts user-generated content. Signals: comments section (Disqus, custom comment system, comments or posts table in DB schema), file uploads (multer, uploadthing, S3 SDK, Cloudinary), forums or community features, user profile pages with custom bios, review or rating systems, social features (follows, messages, feeds). If UGC features are present, check for an Acceptable Use Policy. It may be a standalone page (/acceptable-use, /aup, /community-guidelines, /guidelines) or a dedicated section within the Terms of Service. The AUP should specify: prohibited content types (hate speech, harassment, illegal content, spam, malware), consequences for violations (warning, suspension, termination), and the reporting mechanism for violations.
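The detection step above can be sketched as a dependency-and-schema scan. This is a minimal heuristic sketch: the signal lists, the `hasUgcSignals` name, and the keyword-matching approach are assumptions, and a real audit would also inspect routes and UI features as described above.

```typescript
// Heuristic UGC detection: look for upload/comment-related dependencies and
// schema keywords. Both signal lists are illustrative, not exhaustive.
const UGC_DEPENDENCIES = [
  "multer",
  "uploadthing",
  "cloudinary",
  "disqus-react",
  "@aws-sdk/client-s3",
];
const UGC_SCHEMA_KEYWORDS = ["comment", "post", "review", "message", "upload"];

function hasUgcSignals(dependencies: string[], schemaSource: string): boolean {
  const depHit = dependencies.some((d) => UGC_DEPENDENCIES.includes(d));
  const schemaHit = UGC_SCHEMA_KEYWORDS.some((k) =>
    schemaSource.toLowerCase().includes(k)
  );
  return depHit || schemaHit;
}
```

Keyword matching will produce false positives (e.g. a `post` column unrelated to UGC), so a hit should trigger manual review rather than an automatic fail.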
Pass criteria: If UGC features are present, an Acceptable Use Policy (or equivalent community guidelines) exists, is accessible without authentication, and covers prohibited behaviors, enforcement actions, and how to report violations.
Fail criteria: UGC features are present but no Acceptable Use Policy exists. An AUP exists but is so vague it provides no actual guidance (e.g., "Don't post anything illegal" with no further detail).
Skip (N/A) when: No UGC features detected. Application is purely user-facing with no ability to post, upload, comment, or otherwise contribute content that other users or the public can see.
Cross-reference: For user-facing accessibility and compliance, the Accessibility Basics audit covers foundational requirements.
Detail on fail: Specify what is missing. Example: "Comments feature and file upload system detected (uploadthing dependency, comments table in Prisma schema). No Acceptable Use Policy or community guidelines page found." or "Forum features present but AUP only states 'be respectful' — no prohibited content categories, no enforcement process, no reporting mechanism."
Remediation: Add an Acceptable Use Policy if your application hosts user content:
Acceptable Use Policy — recommended structure:
1. Purpose — what this policy governs and who it applies to
2. Prohibited Content — specific list:
- Illegal content (copyright infringement, CSAM, illegal goods/services)
- Hate speech, harassment, threats, doxxing
- Spam, phishing, malware, deceptive content
- Impersonation (of other users or your brand)
- Content that violates others' intellectual property rights
3. Prohibited Actions — actions beyond just content:
- Scraping, automated access without permission
- Attempting to bypass security controls
- Creating multiple accounts to evade bans
4. Enforcement — what happens when someone violates the policy:
- Warning, content removal, account suspension, permanent ban
- Whether enforcement is at your sole discretion
5. Reporting — how users report policy violations (form URL, email address)
6. Appeals — whether banned users can appeal and how
Link your AUP from your Terms of Service, footer, and any content submission forms.
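The linking step above can be sketched as a shared list of legal links rendered into the footer. The route paths and the `renderFooterLinks` helper are hypothetical assumptions; adapt the paths to your actual routes and your framework's templating.

```typescript
// Hypothetical legal-page links; hrefs are assumptions, adjust to your routes.
const legalLinks = [
  { href: "/terms", label: "Terms of Service" },
  { href: "/privacy", label: "Privacy Policy" },
  { href: "/acceptable-use", label: "Acceptable Use Policy" },
];

// Render the links as a single HTML fragment for the footer.
function renderFooterLinks(links: { href: string; label: string }[]): string {
  return links
    .map((l) => `<a href="${l.href}">${l.label}</a>`)
    .join(" | ");
}
```

Keeping the list in one place makes it easy to reuse the same links on content submission forms, so the AUP is visible at the point where users post.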