All 20 checks with why-it-matters prose, severity, and cross-references to related audits.
GDPR Art. 17 and CCPA §1798.105 grant users the right to erasure — setting an `is_deleted` flag satisfies neither. Name, email, profile picture, and bio left in the database after a deletion request remain queryable by anyone with direct database access, exploitable via SQL injection (CWE-212), and constitute a reportable breach under GDPR Art. 83. Regulators have levied fines for exactly this pattern: data nominally 'deleted' but fully intact in the store.
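A minimal sketch of the difference between flag-only deletion and true erasure, using a hypothetical in-memory `users` store in place of the real database:

```python
# Hypothetical in-memory store standing in for the users table.
users = {
    101: {"name": "Ada", "email": "ada@example.com", "bio": "hello", "is_deleted": False},
}

def soft_delete(user_id):
    # Anti-pattern: the flag flips but every PII field stays queryable.
    users[user_id]["is_deleted"] = True

def erase_user(user_id):
    # Compliant pattern: remove the PII itself, keeping only a
    # non-identifying tombstone so foreign keys stay resolvable.
    users[user_id] = {"erased": True}
```

The tombstone keeps referential integrity intact while leaving nothing for a SQL injection or a direct query to recover.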
Why this severity: Critical because retained PII after deletion directly violates GDPR Art. 17 and exposes the platform to regulatory fines and breach liability if that data is subsequently leaked.
community-privacy-controls.visibility.erasure-completeness

GDPR Art. 7 requires that consent be freely given, specific, and unambiguous — a pre-checked box for analytics or a default-on email marketing enrollment fails all three criteria. Under CCPA §1798.120, users must be able to opt out of data sale at any time. Enabling non-essential processing before consent is collected exposes the platform to enforcement under GDPR Art. 83 (fines up to 4% of global turnover) and CCPA §1798.150 (statutory damages). The ePrivacy Directive Art. 5(3) independently requires opt-in for tracking cookies, making this a multi-jurisdiction risk.
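One way to make default-off consent structural rather than aspirational is a single source of truth for defaults, sketched here with hypothetical purpose names:

```python
# Hypothetical consent model: every non-essential purpose defaults to off
# and only flips on via an explicit, user-initiated call.
DEFAULT_CONSENTS = {
    "analytics": False,        # no pre-checked boxes
    "email_marketing": False,  # no default-on enrollment
    "essential": True,         # strictly necessary processing only
}

def new_user_consents():
    # Every new account starts from the defaults; nothing opts in silently.
    return dict(DEFAULT_CONSENTS)

def grant(consents, purpose):
    # Consent is granted only by an affirmative action, never by default.
    consents[purpose] = True
    return consents
```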
Why this severity: Critical because default-enabled non-essential processing violates GDPR Art. 7 and ePrivacy Art. 5(3) simultaneously, with enforcement exposure in every EU jurisdiction.
community-privacy-controls.visibility.consent-explicit

Direct messages carry the highest reasonable expectation of privacy in any community platform. Routing them through an ML training pipeline without separate, explicit opt-in violates GDPR Art. 7 (consent must be specific to the purpose) and NIST AI RMF GOVERN 6.2 (data governance for AI systems). OWASP LLM06:2025 classifies unintended training data inclusion as a top LLM risk because message content can later surface in model outputs, exposing private conversations to other users. The sender's consent alone is insufficient — both sender and recipient must opt in before message content is processed beyond delivery.
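The all-parties requirement can be sketched as a gate in front of the training pipeline, assuming a hypothetical message shape and per-user opt-in map:

```python
def eligible_for_training(message, training_opt_in):
    # A DM enters the training corpus only when *every* participant has
    # separately opted in; the sender's consent alone is insufficient.
    participants = {message["sender"], message["recipient"]}
    return all(training_opt_in.get(p, False) for p in participants)
```

Defaulting missing users to `False` means an unknown or new account can never be swept into the corpus by omission.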
Why this severity: Critical because using private messages as training data without consent breaches GDPR Art. 7 specificity requirements and can cause private conversation content to leak through model outputs.
community-privacy-controls.visibility.dm-training-consent

Profile visibility enforced only on the frontend is not enforced at all — any API client that bypasses the UI gets full profile data regardless of the user's privacy setting. OWASP A01:2021 (Broken Access Control) is the top web application vulnerability for exactly this reason. GDPR Art. 25 (privacy by design) requires access restriction to be a default, not an afterthought. A two-tier public/private model also fails users who want followers-only visibility, forcing them to choose between full exposure and full lockdown.
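A server-side sketch of three-tier enforcement, with hypothetical field names; the point is that the API layer, not the UI, decides what leaves the server:

```python
def visible_profile(profile, viewer_id, followers):
    # Enforce visibility in the API layer so non-UI clients see the
    # same restrictions. Tiers: "public" | "followers" | "private".
    vis = profile["visibility"]
    if viewer_id == profile["user_id"] or vis == "public":
        return profile
    if vis == "followers" and viewer_id in followers:
        return profile
    # Everyone else gets only a minimal, non-PII envelope.
    return {"user_id": profile["user_id"], "visibility": vis}
```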
Why this severity: Medium because unauthorized profile data access requires a direct API call, but the exposure is complete — every field on the profile is returned — once the frontend enforcement is bypassed.
community-privacy-controls.visibility.visibility-tiers-enforced

Profile-level privacy settings are a coarse control — users frequently want to share a public post while keeping their overall profile private, or archive a sensitive post while keeping their profile public. Without per-post visibility, every privacy decision is all-or-nothing. More critically, feed and search APIs that check profile visibility but not post visibility will surface restricted posts to anyone who queries the endpoint with a known user ID, violating GDPR Art. 25's data minimization principle (OWASP A01:2021).
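A simplified sketch of a feed query that checks both levels independently; the parameter names are hypothetical:

```python
def feed_posts(posts, profile_public, viewer_is_owner=False):
    # Both gates are checked: profile visibility first, then each
    # post's own visibility flag. Neither level overrides the other.
    if viewer_is_owner:
        return posts
    if not profile_public:
        return []
    return [p for p in posts if p["visibility"] == "public"]
```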
Why this severity: Medium because the exposure is limited to posts on profiles that are public but contain individually-restricted content, but the bypass is trivially achieved through a direct API call.
community-privacy-controls.visibility.post-visibility-independent

A private profile that appears in site search or Google results is not private — it is merely hard to navigate to directly. Search engine indexing of private content breaches GDPR Art. 25's privacy-by-default requirement and CWE-200 (information exposure). Users who set their profiles to private expect complete exclusion from discovery, not just a locked profile page. Third-party search indices (Elasticsearch, Algolia) that ingest all records regardless of visibility are a particularly common oversight because indexing pipelines are often built before access control is finalized.
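One defensible design is to gate the ingestion pipeline itself rather than filter at query time, sketched here with a plain dict standing in for the search index:

```python
def index_record(doc, index):
    # Gate ingestion: private content never reaches the search store,
    # so there is no query-time filter that can be forgotten.
    if doc["visibility"] == "public":
        index[doc["id"]] = doc

def deindex_user(user_id, index):
    # Called when a user flips their profile to private: purge
    # everything they own from the index, not just future writes.
    for doc_id in [i for i, d in index.items() if d["owner"] == user_id]:
        del index[doc_id]
```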
Why this severity: Medium because the exposure is indirect — private content appears in search results rather than being directly accessible — but it defeats the user's explicit privacy intent.
community-privacy-controls.visibility.search-respects-privacy

GDPR Art. 20 (data portability) and CCPA §1798.100 both require that users receive a complete, machine-readable copy of their data upon request, within defined timeframes. An export that omits message history, follower lists, or activity logs is a partial response that fails compliance. Regulators specifically audit export completeness — the ICO and CNIL have both taken enforcement action against platforms that provided incomplete or unprocessable archives. Beyond compliance, denying users access to their own data creates lock-in and erodes trust.
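A fail-closed sketch: the export builder refuses to produce an archive that silently omits a category. The category names and `fetchers` registry are hypothetical:

```python
REQUIRED_CATEGORIES = {
    "profile", "posts", "comments", "messages", "followers", "activity_log",
}

def build_export(fetchers, user_id):
    # Fail closed: an archive missing any declared category is an error,
    # not a quietly incomplete response.
    missing = REQUIRED_CATEGORIES - fetchers.keys()
    if missing:
        raise ValueError(f"export incomplete, missing: {sorted(missing)}")
    return {cat: fetchers[cat](user_id) for cat in REQUIRED_CATEGORIES}
```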
Why this severity: High because incomplete or missing data export is a direct violation of GDPR Art. 20 and CCPA §1798.100, with documented regulatory enforcement history.
community-privacy-controls.data-rights.data-export-completeness

Instant, irreversible account deletion with a single confirmation click is a UX failure that creates real harm — users can accidentally delete years of content, and phishing attacks can trick users into triggering deletion. GDPR Art. 17 does not require immediate deletion; a grace period is both compliant and protective. Without a cooling-off window and email confirmation, there is no mechanism to detect unauthorized deletion requests (e.g., from a stolen session token) before the damage is permanent.
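The schedule-then-purge pattern can be sketched as follows; the 14-day window is an assumed policy choice, not a regulatory requirement:

```python
from datetime import datetime, timedelta

GRACE_PERIOD = timedelta(days=14)  # assumed cooling-off window

def request_deletion(account, now):
    # Schedule, don't delete: the account stays recoverable until the
    # grace period elapses, leaving room to catch hijacked requests.
    account["delete_after"] = now + GRACE_PERIOD
    return account

def cancel_deletion(account):
    account["delete_after"] = None
    return account

def purge_due(accounts, now):
    # A background job performs the actual irreversible erasure.
    return [a for a in accounts
            if a.get("delete_after") and a["delete_after"] <= now]
```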
Why this severity: High because irreversible instant deletion with no grace period eliminates the only practical safeguard against accidental or malicious account removal.
community-privacy-controls.data-rights.deletion-cooling-off

Deleting a user record while leaving their posts and comments attributed to the original user ID or username is a partial erasure that violates GDPR Art. 17. The username on a comment thread is PII — it identifies the person even after account deletion. CWE-212 (improper cross-boundary removal of sensitive data) captures this exact failure mode. Inconsistent handling across content types — posts anonymized but comments not, or media files left with original paths — compounds the exposure.
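One way to avoid per-content-type drift is a single anonymization rule applied to every content record, sketched with hypothetical field names:

```python
ANON = {"user_id": None, "username": "[deleted]"}

def anonymize_content(items):
    # One rule for every content type: posts, comments, and media
    # records all lose their attribution together, so no type is
    # accidentally skipped when a new content kind is added.
    for item in items:
        item.update(ANON)
    return items
```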
Why this severity: High because content that retains original attribution after account deletion constitutes continuing PII exposure in violation of GDPR Art. 17, even if the user record itself was removed.
community-privacy-controls.data-rights.deleted-content-handling

GDPR Art. 7(2) requires that consent be demonstrable — if challenged by a regulator or in litigation, you must prove when the user consented, to which policy version, and via which mechanism. A current-state boolean in a `user_settings` table provides none of this. GDPR Art. 5(2) (accountability principle) and SOC 2 CC2.3 both require documented audit trails. Without immutable consent records, every withdrawal or complaint triggers a 'our word against theirs' dispute that regulators resolve in the user's favor.
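An append-only consent ledger, from which current state is derived rather than stored, can be sketched as:

```python
import time

consent_log = []  # append-only; rows are never updated or deleted

def record_consent(user_id, purpose, granted, policy_version, mechanism):
    entry = {
        "user_id": user_id,
        "purpose": purpose,
        "granted": granted,
        "policy_version": policy_version,  # which policy text the user saw
        "mechanism": mechanism,            # e.g. "signup-checkbox"
        "recorded_at": time.time(),
    }
    consent_log.append(entry)
    return entry

def current_consent(user_id, purpose):
    # Current state is derived from the history, not a mutable boolean,
    # so the answer to "when, which version, how" is always on record.
    rows = [e for e in consent_log
            if e["user_id"] == user_id and e["purpose"] == purpose]
    return rows[-1]["granted"] if rows else False
```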
Why this severity: High because undemonstrable consent is legally equivalent to no consent under GDPR Art. 7(2), converting any disputed processing into an unlawful processing violation.
community-privacy-controls.data-rights.consent-records

GDPR Art. 7(3) is explicit: withdrawal of consent must be as easy as giving it, and processing must stop immediately upon withdrawal. A settings toggle that logs a withdrawal but continues sending events to analytics or ad networks is a phantom control — it creates the appearance of compliance while the violation continues. CCPA §1798.120 similarly requires that opt-out of data sale take effect within 15 business days. A backend that ignores the consent flag converts every post-withdrawal data collection event into a separate unlawful processing incident.
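The key design point is checking consent at emit time, so withdrawal takes effect on the very next event. A minimal sketch with a hypothetical event shape:

```python
def emit_event(event, consents, sink):
    # Consent is evaluated per event, at send time. Flipping the flag
    # off stops the pipeline immediately, with no deploy or cache flush.
    if consents.get(event["purpose"], False):
        sink.append(event)
        return True
    return False
```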
Why this severity: High because continued processing after documented withdrawal transforms a configuration issue into active, ongoing GDPR Art. 7(3) violations with each additional data collection event.
community-privacy-controls.data-rights.consent-withdrawal

Data portability (GDPR Art. 20) entitles users to their own data — not other users' data. An export query that fetches entire conversation threads by conversation ID rather than by participant ID will return messages the requesting user received from, or in group threads containing, other users who did not request the export. This is a direct CWE-285 (improper authorization) and CWE-200 (information exposure) violation, and a data breach reportable under GDPR Art. 33 if it reaches the requesting user's device.
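One defensible scoping, matching the author-not-thread rule described above, filters by the requester's own sender ID; field names are hypothetical:

```python
def export_messages(messages, requester_id):
    # Scope by author, not by conversation: the archive contains only
    # the messages the requester wrote, never other participants'
    # message bodies pulled in via a shared conversation ID.
    return [m for m in messages if m["sender_id"] == requester_id]
```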
Why this severity: Medium because the scope of exposure is bounded to conversation participants whose messages the user had legitimate access to read, but the export still violates their right to control their own data copy.
community-privacy-controls.data-rights.export-excludes-private-messages

An unthrottled data export endpoint is an unauthenticated bulk-data harvesting vector. A malicious actor with a valid session token — or a compromised account — can trigger unlimited export jobs, each generating a complete archive of user PII. CWE-770 (allocation of resources without limits) applies directly. Even without malicious intent, runaway export requests can exhaust database connections and background job queues. OWASP A05:2021 (Security Misconfiguration) includes absent rate limiting on sensitive endpoints.
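A per-account throttle can be sketched as below; the one-export-per-24-hours policy is an assumed example, not a prescribed limit:

```python
import time

class ExportRateLimiter:
    # Assumed policy: at most one export job per account per 24 hours.
    def __init__(self, min_interval=24 * 3600):
        self.min_interval = min_interval
        self.last_export = {}

    def allow(self, user_id, now=None):
        now = time.time() if now is None else now
        last = self.last_export.get(user_id)
        if last is not None and now - last < self.min_interval:
            return False  # denied attempts do not reset the window
        self.last_export[user_id] = now
        return True
```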
Why this severity: Low because exploitation requires an authenticated session, but the potential damage — bulk export of an entire account's PII with no throttle — justifies the control.
community-privacy-controls.data-rights.export-rate-limited

Behavioral data — browsing history, content preferences, inferred demographics — is among the most sensitive data a community platform holds. Sharing it with ad networks (Google Ads, Meta Pixel, TikTok) without explicit opt-in violates GDPR Art. 6 (no lawful basis), GDPR Art. 7 (no consent), CCPA §1798.120 (right to opt out of sale/sharing), and ePrivacy Art. 5(3) (tracking without consent). Regulators have issued nine-figure fines for exactly this pattern. A Facebook Pixel firing on page load without consent is not an implementation detail — it is an ongoing violation for every page view.
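One robust approach is to gate tracker injection on the server, so no request ever fires for a non-consenting user. A minimal sketch with a placeholder script URL:

```python
def render_trackers(page_html, consents):
    # Server-side gate: the tracker snippet is only injected into the
    # page after opt-in, so the browser never contacts the ad network
    # for non-consenting users (stronger than hiding it client-side).
    snippets = []
    if consents.get("ad_targeting", False):  # default: no tracking
        snippets.append("<script src='https://example.invalid/pixel.js'></script>")
    return page_html + "".join(snippets)
```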
Why this severity: Medium because the data leaves the platform boundary on every affected page load, but the immediate harm requires the third party to act on the data rather than causing direct account compromise.
community-privacy-controls.consent.ad-targeting-opt-in

GDPR Art. 13 requires that users be informed at collection time which third parties receive their data and for what purpose. A privacy policy that says 'we use analytics partners' without naming Google Analytics, Amplitude, or Stripe does not meet this standard. GDPR Art. 28 requires a data processing agreement with each processor — you cannot have a DPA with a vendor you haven't disclosed. CCPA §1798.100(a) requires disclosure of data categories and recipients. A gap between disclosed and actually-integrated services is the most common finding in regulatory audits.
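The disclosed-versus-integrated gap lends itself to an automated check in CI, sketched here as a simple set difference over vendor names (the lists themselves would come from the codebase and the published policy):

```python
def disclosure_gap(integrated_vendors, disclosed_vendors):
    # Any vendor wired into the codebase but absent from the privacy
    # policy's named list is an audit finding.
    return sorted(set(integrated_vendors) - set(disclosed_vendors))
```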
Why this severity: Medium because the harm is indirect — undisclosed sharing doesn't cause immediate data loss — but a single regulator request for the third-party list turns the omission into a documented compliance failure.
community-privacy-controls.consent.third-party-disclosure

OAuth app grants that cannot be individually revoked give third-party applications permanent access to a user's account unless the user changes their password — a drastic action that breaks all legitimate sessions. GDPR Art. 7(3) requires withdrawal of consent to be as easy as giving it; if consent was given by authorizing an OAuth app, revocation must be available at the same granularity. CWE-285 and OWASP A01:2021 classify unrevocable authorization grants as access control failures. A user who suspects an app is misusing their data has no recourse short of a full password reset.
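Per-grant revocation can be sketched as follows, with hypothetical grant and token records; revoking one app leaves every other app's grant and the user's own sessions untouched:

```python
def revoke_grant(grants, tokens, grant_id):
    # Revoking one app invalidates its grant and all tokens issued
    # under it, without a password reset or collateral session loss.
    grants.pop(grant_id, None)
    return [t for t in tokens if t["grant_id"] != grant_id]
```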
Why this severity: Low because exploitation requires a third-party app to already have a valid grant, but permanent irrevocable access grants violate GDPR Art. 7(3) and create lasting exposure if an app is compromised.
community-privacy-controls.consent.oauth-app-revocation

Activity status — 'last seen 3 minutes ago', 'currently online', 'typing' — reveals real-time behavioral patterns that users may want to keep private for safety or personal reasons. Domestic abuse victims, people avoiding specific contacts, and users who want to read messages without triggering social pressure are all impacted by forced activity disclosure. GDPR Art. 25 (privacy by design) requires that data minimization be the default: broadcast activity only when users have affirmatively chosen to share it. CWE-200 captures the information exposure when status fields are included in API responses regardless of user preference.
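A serializer-level sketch of the default-off rule, with hypothetical field names: activity keys are omitted entirely rather than nulled, so a client that ignores preferences has nothing to leak:

```python
def presence_payload(user, settings):
    # Activity fields are absent from the response unless the user
    # has opted in; an absent key cannot be exposed by any client.
    payload = {"user_id": user["user_id"]}
    if settings.get("share_activity", False):  # default: do not share
        payload["online"] = user["online"]
        payload["last_seen"] = user["last_seen"]
    return payload
```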
Why this severity: Low because the exposed data is behavioral metadata rather than account credentials or PII, but the real-time nature makes it a safety concern for vulnerable users.
community-privacy-controls.account-control.activity-status-toggle

A public follower list reveals a user's social graph to anyone who calls the API — who they follow exposes interests and associations; who follows them can reveal community membership. Users may have legitimate reasons to keep this information private: they may be following accounts related to health conditions, political views, or personal situations they haven't disclosed publicly. GDPR Art. 25's data minimization principle requires that social graph data not be exposed beyond what the user has authorized. OWASP A01:2021 applies when the API returns this data without checking visibility settings.
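The API-side check can be sketched as below, with a default-private setting and hypothetical names; returning `None` rather than an empty list lets the caller distinguish "hidden" from "has no followers" and respond with 403/404:

```python
def follower_list(owner_id, viewer_id, followers, settings):
    # The social graph is returned only to the owner, or to others
    # when the owner has explicitly flipped a default-private setting.
    if viewer_id == owner_id or settings.get("followers_public", False):
        return list(followers)
    return None  # caller maps this to 403/404, not an empty list
```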
Why this severity: Low because the exposed data is relational metadata rather than directly sensitive PII, but social graph visibility can enable targeted harassment or out users in sensitive categories.
community-privacy-controls.account-control.follower-list-privacy

Privacy settings that users cannot find or understand are functionally absent. GDPR Art. 7(3) requires withdrawal of consent to be straightforward — a single flat list of 40 settings with technical labels like 'behavioral_targeting_v2' does not meet that standard. Poorly organized settings also increase support burden and erode trust: users who cannot find a control assume the platform is hiding it. GDPR Art. 25 (privacy by design) requires that controls be accessible and meaningful, not merely present.
Why this severity: Low because disorganized settings don't expose data directly, but they systematically prevent users from exercising rights they are legally entitled to exercise, compounding every other privacy control gap.
community-privacy-controls.account-control.settings-organization

GDPR Art. 5(1)(e) (storage limitation) and CCPA §1798.100(a) require that data is not retained longer than necessary for its stated purpose. A privacy policy that documents retention periods but has no automated enforcement is a compliance promise the system cannot keep — activity logs, failed login records, and temporary uploads accumulate indefinitely. ISO 27001:2022 A.8.10 and NIST SP 800-53r5 SI-12 both require defined and enforced retention schedules. Regulators treat undeclared or unenforced retention as evidence of systemic disregard for data minimization.
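Enforcement can be sketched as a scheduled sweep driven by a declared schedule; the categories and windows below are assumed examples mirroring a documented policy:

```python
from datetime import datetime, timedelta

# Assumed retention schedule; it should mirror the documented policy
# so the system actually keeps the promise the policy makes.
RETENTION = {
    "activity_log": timedelta(days=90),
    "failed_login": timedelta(days=30),
    "temp_upload": timedelta(days=7),
}

def sweep(records, now):
    # Periodic job: keep only records still inside their category's
    # retention window; everything past the window is dropped.
    return [r for r in records
            if now - r["created_at"] <= RETENTION[r["category"]]]
```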
Why this severity: Low because indefinite retention causes harm only if the data is later breached or subpoenaed, but the regulatory exposure from documented-but-unenforced retention periods is direct and demonstrable.
community-privacy-controls.account-control.retention-policy
Open Community Privacy & Controls Audit