Apple's guideline 1.2 requires apps with UGC to have "a mechanism to filter objectionable material from being posted." A report button with nowhere to send reports — no database table, no admin routes, no moderation queue — satisfies the UI requirement but fails the policy requirement. Reviewers checking for moderation infrastructure will find nothing, and the app gets rejected. Beyond review risk, an app that accepts user content with no ability to act on reports or ban users becomes a liability the moment objectionable content appears. Even a minimal implementation (reports table + admin screen + ban capability) satisfies Apple's requirement.
High, because Apple guideline 1.2 mandates a filtering mechanism for UGC platforms — without backend moderation infrastructure there is no compliance, and rejection follows.
Build a minimal moderation backend before submission. At minimum: a reports table, an admin endpoint, and user ban capability.
-- supabase/migrations/add_moderation.sql
-- Store user reports so the admin screen has something to act on.
create table content_reports (
  id uuid primary key default gen_random_uuid(),
  content_id text not null,        -- points at the reported post/comment
  content_type text not null,      -- e.g. 'post', 'comment'
  reporter_id uuid references users(id),
  reason text not null,
  status text default 'pending',   -- e.g. 'pending' / 'resolved' / 'dismissed'
  created_at timestamptz default now()
);

-- Ban capability: a flag that admin tooling can set.
alter table users add column is_banned boolean default false;
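The migration covers report storage; the admin endpoint and ban capability can be sketched against Supabase's auto-generated REST layer (PostgREST). This is a minimal sketch, not a prescribed implementation — the SUPABASE_URL and SERVICE_KEY placeholders and all function names below are assumptions:

```typescript
// Minimal admin moderation helpers using Supabase's PostgREST REST API.
// Replace the placeholders with your project values; the service role key
// must only ever be used server-side.
const SUPABASE_URL = "https://YOUR-PROJECT.supabase.co";
const SERVICE_KEY = "YOUR-SERVICE-ROLE-KEY";

// Pure URL builders, so the request shape is easy to test without a network.
export function pendingReportsUrl(base: string): string {
  // PostgREST filter syntax: ?status=eq.pending
  return `${base}/rest/v1/content_reports?status=eq.pending&order=created_at.desc`;
}

export function banUserUrl(base: string, userId: string): string {
  return `${base}/rest/v1/users?id=eq.${encodeURIComponent(userId)}`;
}

function headers(key: string): Record<string, string> {
  return {
    apikey: key,
    Authorization: `Bearer ${key}`,
    "Content-Type": "application/json",
  };
}

// List reports awaiting review (feeds the admin screen).
export async function listPendingReports(): Promise<unknown[]> {
  const res = await fetch(pendingReportsUrl(SUPABASE_URL), {
    headers: headers(SERVICE_KEY),
  });
  if (!res.ok) throw new Error(`listPendingReports failed: ${res.status}`);
  return res.json();
}

// Ban a user by flipping the is_banned flag added in the migration.
export async function banUser(userId: string): Promise<void> {
  const res = await fetch(banUserUrl(SUPABASE_URL, userId), {
    method: "PATCH",
    headers: headers(SERVICE_KEY),
    body: JSON.stringify({ is_banned: true }),
  });
  if (!res.ok) throw new Error(`banUser failed: ${res.status}`);
}
```

Expose these behind an authenticated admin route only — the service role key bypasses row-level security.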
For automated moderation, call the OpenAI Moderation API (https://api.openai.com/v1/moderations) at content submission time — the endpoint is free to use with an API key. Add an admin screen that lists pending reports so your team can act on them.
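That submission-time check can be sketched as follows, assuming the standard /v1/moderations request shape (`input` string in, `results[0].flagged` out); the function names are illustrative:

```typescript
// Response shape from POST https://api.openai.com/v1/moderations
// (only the fields used here).
interface ModerationResponse {
  results: Array<{
    flagged: boolean;
    categories: Record<string, boolean>;
  }>;
}

// Pure helper: decide whether to reject content, given the API response.
export function shouldBlock(resp: ModerationResponse): boolean {
  return resp.results.some((r) => r.flagged);
}

// Call the Moderation API at content submission time.
export async function moderateText(
  text: string,
  apiKey: string,
): Promise<boolean> {
  const res = await fetch("https://api.openai.com/v1/moderations", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ input: text }),
  });
  if (!res.ok) throw new Error(`moderation request failed: ${res.status}`);
  return shouldBlock((await res.json()) as ModerationResponse);
}
```

When `shouldBlock` returns true, either reject the post outright or store it with a quarantined status for admin review.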
ID: app-store-review-blockers.content-moderation.moderation-system-exists
Severity: high
What to look for: Count all relevant instances and enumerate each. Look for evidence of a moderation backend: a moderation table or collection, moderation-related API routes (/api/moderate, /api/admin/reports, /api/ban), third-party moderation service integrations (perspective-api, hivemoderation, sightengine, AWS Rekognition, OpenAI Moderation API, @azure/ai-content-safety), or an admin panel with moderation capabilities. In the codebase, search for moderation, ban, suspend, review-queue, content-policy in function names, route handlers, and database schema files.
Pass criteria: Evidence exists of a system — even basic — for reviewing and acting on reported content. At least 1 implementation must be verified. This can be a simple database table for reports + an admin view, an integration with a third-party moderation API, or automated filtering.
Fail criteria: UGC is present but no moderation infrastructure exists — no admin routes, no report storage, no third-party moderation service, no ban capability.
Skip (N/A) when: App has no user-generated content.
Detail on fail: "App accepts user posts with no moderation backend — reports have nowhere to go and no admin tooling exists to act on them"
Remediation: Apple's guideline 1.2 requires "a mechanism to filter objectionable material from being posted." A moderation system need not be complex to start.
Automated filtering can use the OpenAI Moderation API (https://api.openai.com/v1/moderations). Review the src/ or app/ directory for existing implementation patterns.