Content policy is documented and accessible

ab-000710 · community-moderation-safety.policy-transparency.policy-documented
Severity: high · Status: active

Why it matters

Without a documented content policy, users cannot know what the platform prohibits, moderators have no authoritative standard to apply consistently, and the platform has no defensible basis for enforcement actions. DSA Art. 14 requires platforms to document and publish their terms and conditions, including content restrictions, and COPPA (16 CFR Part 312) further mandates transparency about moderation practices for platforms serving minors. A policy vague enough to read as "be nice" cannot support consistent enforcement and exposes the platform to claims of arbitrary or biased moderation, which is both a reputational and a legal liability.

Severity rationale

High because the absence of a documented content policy creates a DSA compliance gap and eliminates the legal basis for consistent moderation enforcement.

Remediation

Create a public content policy page at src/app/community-guidelines/page.tsx that explicitly names prohibited content categories and the actions taken when they are violated. At minimum, the policy must cover:

  • Hate speech and discrimination
  • Harassment and targeted abuse
  • Spam and coordinated inauthentic behavior
  • Explicit or graphic violence
  • Illegal content

Document enforcement tiers: warning, content removal, temporary suspension, permanent ban. Link this page from the signup flow, the footer, and any moderation notification emails. A generic 'be nice' statement with no enumerated categories does not satisfy this check.
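As a sketch, the prohibited categories and enforcement tiers above could be kept as typed data that both the public guidelines page and moderation tooling render from, so the two never drift apart. All identifiers below (type names, category ids, the helper function) are illustrative assumptions, not part of this check's specification:

```typescript
// Hypothetical single source of truth for the content policy.
// Category ids, descriptions, and tier assignments are illustrative.

type EnforcementTier =
  | "warning"
  | "content-removal"
  | "temporary-suspension"
  | "permanent-ban";

interface PolicyCategory {
  id: string;
  title: string;
  description: string;
  // Tiers that may be applied on violation, mildest first.
  enforcement: EnforcementTier[];
}

const POLICY_CATEGORIES: PolicyCategory[] = [
  {
    id: "hate-speech",
    title: "Hate speech and discrimination",
    description: "Content attacking people based on protected characteristics.",
    enforcement: ["content-removal", "temporary-suspension", "permanent-ban"],
  },
  {
    id: "harassment",
    title: "Harassment and targeted abuse",
    description: "Repeated unwanted contact or targeted abuse of a user.",
    enforcement: ["warning", "content-removal", "permanent-ban"],
  },
  {
    id: "spam",
    title: "Spam and coordinated inauthentic behavior",
    description: "Bulk, repetitive, or deceptive coordinated posting.",
    enforcement: ["content-removal", "temporary-suspension"],
  },
  {
    id: "graphic-violence",
    title: "Explicit or graphic violence",
    description: "Gratuitous depictions of violence or gore.",
    enforcement: ["content-removal", "permanent-ban"],
  },
  {
    id: "illegal-content",
    title: "Illegal content",
    description: "Content unlawful in the jurisdictions the platform serves.",
    enforcement: ["content-removal", "permanent-ban"],
  },
];

// The check requires at least 3 distinct enumerated categories;
// the five above satisfy it with room to spare.
function meetsMinimumCoverage(categories: PolicyCategory[], min = 3): boolean {
  return new Set(categories.map((c) => c.id)).size >= min;
}
```

A page component at src/app/community-guidelines/page.tsx could then map over `POLICY_CATEGORIES` to produce the public listing, and moderation emails could reference the same ids.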

Detection

  • ID: community-moderation-safety.policy-transparency.policy-documented
  • Severity: high
  • What to look for: Look for a public content policy, terms of service, or community guidelines page. Check what's documented: prohibited content (hate speech, violence, adult content), user conduct rules, and enforcement actions.
  • Pass criteria: A public content policy page exists at a discoverable URL and outlines at least 3 prohibited content categories (e.g., hate speech, spam, harassment), user conduct expectations, and what happens when policies are violated. Count the number of distinct prohibited categories listed.
  • Fail criteria: No content policy is documented, or the policy is vague/non-existent. A generic "be nice" statement with no specific categories does not count as pass.
  • Skip (N/A) when: Never — transparency is essential.
  • Detail on fail: "No content policy page. Users don't know what's prohibited or what happens if they violate rules."
  • Remediation: Create a public content policy page at src/app/community-guidelines/page.tsx documenting prohibited content categories and enforcement actions.
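The pass/fail criteria above lend themselves to a simple automated heuristic: fetch the policy page's text and count how many distinct prohibited-content categories it names. The keyword patterns below are assumptions chosen to match the example categories in this check, not a prescribed detection method:

```typescript
// Illustrative detector for the pass criteria: at least 3 distinct
// prohibited-content categories enumerated in the policy text.
// Keyword regexes are assumptions and would need tuning per platform.

const CATEGORY_KEYWORDS: Record<string, RegExp> = {
  "hate-speech": /hate speech|discriminat/i,
  harassment: /harass|targeted abuse/i,
  spam: /spam|inauthentic/i,
  violence: /graphic violence|gore/i,
  illegal: /illegal content|unlawful/i,
};

function countDistinctCategories(pageText: string): number {
  return Object.values(CATEGORY_KEYWORDS).filter((re) => re.test(pageText))
    .length;
}

function policyCheckPasses(pageText: string): boolean {
  // A generic "be nice" statement matches no category and fails.
  return countDistinctCategories(pageText) >= 3;
}
```

Keyword matching is deliberately coarse: it catches the clear fail case (no enumerated categories) without attempting to judge policy quality, which still requires human review.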
