All 19 checks with why-it-matters prose, severity, and cross-references to related audits.
Apple guideline 4.2 (Minimum Functionality) and Google Play's Spam policy exist specifically to block thin WebView wrappers masquerading as native apps. A single-screen app that renders an external URL full-screen provides no offline capability, no native hardware integration, and no experience a mobile browser cannot already deliver — reviewers are trained to spot this pattern and reject on first pass. Beyond the immediate rejection, an app that clears this bar only by the thinnest margin faces re-rejection on future updates whenever guidelines tighten.
Why this severity: Critical because a thin-wrapper app will be rejected outright on first review, wasting weeks of submission time and potentially triggering heightened scrutiny on subsequent submissions from the same developer account.
app-store-policy-compliance.app-quality.genuine-utility (See full pattern)

Apple guideline 4.3 and Google Play's Repetitive Content policy target apps that dilute store quality without adding value — commodity utilities (tip calculators, flashlights, unit converters) with no differentiator, and near-duplicate submissions from the same developer. Beyond rejection, being flagged as spam can result in removal of existing apps from the same developer account. The store algorithms also deprioritize apps from accounts with spam history, reducing organic discoverability even after policy issues are resolved.
Why this severity: High because spam and near-duplicate flags attach to the developer account, not just the single submission, and can suppress discoverability or accelerate future rejections across the entire account.
app-store-policy-compliance.app-quality.no-spam-duplicate (See full pattern)

Apple guideline 4.1 and Google Play's Impersonation policy go beyond trademark law — they apply even when the developer has no intent to deceive. An app name containing a brand prefix like 'Insta-' or an icon that shares a color palette and shape language with a recognizable competitor will be rejected on visual similarity alone. Downstream, trademark holders can file DMCA-equivalent takedowns even after an app passes initial review, resulting in removal from the store and potential trademark infringement liability.
Why this severity: Critical because impersonation violations trigger immediate rejection, can escalate to developer account termination, and expose the developer to trademark infringement litigation independent of the store decision.
app-store-policy-compliance.app-quality.no-impersonation (See full pattern)

Apple guideline 5.2 and Google Play's Intellectual Property policy treat IP violations as grounds for immediate removal — not just rejection. Fonts downloaded from DaFont are frequently personal-use only; stock photos from Freepik often require a paid commercial license; sports team logos and entertainment character art almost always require a specific licensing agreement. Beyond store policy, bundling unlicensed commercial assets is a legal liability: license compliance tooling such as SPDX exists precisely because bundled third-party assets carry enforceable obligations, and IP owners actively monitor app stores and issue takedowns. An app can ship, rank, and accumulate users, then be removed and sued without warning.
Why this severity: High because IP violations trigger retroactive removal after approval, not just rejection, and create legal liability independent of the store enforcement action.
app-store-policy-compliance.app-quality.ip-rights (See full pattern)

Apple guideline 1.1 and Google Play's Dangerous Products policy apply zero-tolerance enforcement — there is no editorial exemption for context or intent when the content provides step-by-step instructions for manufacturing weapons, drugs, or explosives. For apps using AI assistants, this extends to the system prompt: a prompt that removes safety constraints can produce policy-violating output on demand (OWASP LLM02, Insecure Output Handling), and the developer is held responsible. A single user report of harmful content in an AI-assisted app is enough to trigger expedited review and removal.
Why this severity: High because dangerous content or an unguarded AI system prompt will result in immediate removal and potential developer account termination — both stores apply zero-tolerance enforcement regardless of context or stated intent.
app-store-policy-compliance.content-restrictions.no-dangerous-content (See full pattern)

Apple guideline 5.1.3 and Google Play's Health Apps policy, reinforced by FTC health claims regulations, prohibit unsubstantiated diagnostic and treatment claims — 'clinically proven to reduce anxiety' or 'detects your condition' require verifiable regulatory clearance (FDA 510(k), CE mark) that the vast majority of wellness apps do not have. Beyond rejection, the FTC actively pursues health claim enforcement actions against app developers, with civil penalties in the millions. A missing disclaimer on a health screen also creates tort liability if a user delays seeking medical care based on the app's output.
Why this severity: High because unsubstantiated health claims trigger Apple rejection plus FTC enforcement exposure, and the absence of a medical disclaimer creates independent tort liability if users rely on the app's output for clinical decisions.
app-store-policy-compliance.content-restrictions.health-claims (See full pattern)

Apple guideline 5.6 and Google Play's Deceptive Behavior policy treat deception as grounds for permanent developer account termination, not just rejection. Fake virus or battery alerts (apps cannot access OS-level security state on iOS or Android), claims of phone-cleaning capabilities (impossible in a sandboxed app), and dark-pattern subscription cancellation flows (deliberate friction to prevent unsubscribing) are among the most scrutinized patterns in app review. Dark patterns that obscure subscription terms (CWE-451, UI Misrepresentation of Critical Information) also create legal exposure under FTC regulations and EU Digital Services Act enforcement.
Why this severity: Critical because deceptive behavior findings result in permanent developer account termination — not just rejection — and the FTC and EU DSA both actively pursue dark-pattern enforcement against subscription apps.
app-store-policy-compliance.content-restrictions.no-deceptive-behavior (See full pattern)

Both Apple's Generative AI guidelines (2024) and Google Play's AI-generated content policy require that AI output be moderated before display, and that synthetic media of real people carry explicit consent and disclosure. OWASP LLM02 (Insecure Output Handling) and NIST AI RMF GOVERN-1.7 classify direct pass-through of LLM output to users as a governance failure. Practical consequences: a deepfake face-swap app without age verification will be rejected on first review; an AI chat that produces unmoderated harmful content will be removed after a user report regardless of the developer's intent.
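The moderation-before-display requirement can be sketched as a gate that classifies every LLM response before the renderer ever sees it. This is a minimal illustration, not a real moderation SDK — `ModerationVerdict`, `moderateBeforeDisplay`, and the stub classifier are all hypothetical names:

```typescript
// Sketch: AI output is classified before it reaches the UI, so flagged
// content is replaced rather than passed through (the OWASP LLM02 failure
// mode is rendering raw model output directly).
type ModerationVerdict = { flagged: boolean; categories: string[] };
type Classifier = (text: string) => ModerationVerdict;

function moderateBeforeDisplay(
  llmOutput: string,
  classify: Classifier,
  fallbackMessage = "This response was withheld by content moderation.",
): string {
  const verdict = classify(llmOutput);
  // Never hand flagged output to the renderer.
  return verdict.flagged ? fallbackMessage : llmOutput;
}

// Stub standing in for a real moderation endpoint.
const stubClassifier: Classifier = (text) => ({
  flagged: /weapon|explosive/i.test(text),
  categories: [],
});
```

In a production pipeline the classifier would be an asynchronous call to a moderation service, but the control-flow point is the same: the gate sits between the model and the screen.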
Why this severity: Medium because unmoderated AI output expands the app's attack surface to include any content an LLM can be induced to produce, while deepfake generation without consent mechanisms creates legal liability that persists after store removal.
app-store-policy-compliance.content-restrictions.ai-content-moderated (See full pattern)

US Truth in Lending Act (TILA), FINRA disclosure rules, and both Apple guideline 3.1.5 and Google Play's Financial Services policy require specific, verifiable disclosures before a user commits to a financial product — investment risk warnings, APR disclosure on loans, and broker-dealer registration confirmations. Missing these is not a gray area: FINRA and the CFPB actively pursue enforcement against app developers, and civil penalties start in the tens of thousands per violation. An investment app missing 'past performance does not guarantee future results' or a lending app missing APR disclosure can be rejected, removed, and fined independently.
Why this severity: High because financial regulation violations create legal liability with civil penalties that exist independently of the app store enforcement action, and regulators can pursue the developer directly without involving Apple or Google.
app-store-policy-compliance.regulated-industries.financial-compliance (See full pattern)

Apple guideline 7.3, Apple guideline 3.1.1 (loot box odds), Google Play's Real-Money Gambling policy, and COPPA §312.5 (child protection) all apply simultaneously to gambling apps — and failures in any one are grounds for immediate rejection. Age verification gaps are the most common failure: a casino screen accessible without a date-of-birth gate will be caught by automated review tools in both stores. Loot boxes linked to real-money purchases without disclosed odds are a specific named violation in Apple's 3.1.1, and sweepstakes without posted official rules violate US law in 48 states independently of store policy.
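The date-of-birth gate described above reduces to a single comparison. A minimal sketch, assuming a 21-year threshold (the legal gambling age varies by jurisdiction, so the constant and the helper name are illustrative):

```typescript
// Sketch of a DOB gate: a user passes only if their date of birth is on
// or before (now minus minimumYears). This must run before any gambling
// screen is reachable, not after.
function isOfAge(dob: Date, now: Date, minimumYears = 21): boolean {
  const cutoff = new Date(now);
  cutoff.setFullYear(cutoff.getFullYear() - minimumYears);
  // Born on the cutoff date means the user turns minimumYears today.
  return dob.getTime() <= cutoff.getTime();
}
```

The important review-facing property is where the gate sits: automated tooling checks whether the gambling surface is reachable without it, not whether the arithmetic is correct.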
Why this severity: High because gambling policy violations are a permanent ban risk for the developer account, and operating real-money gambling without the specific Apple entitlement or Google Play program approval means the app cannot legally run in either store regardless of code quality.
app-store-policy-compliance.regulated-industries.gambling-compliance (See full pattern)

Apple guideline 5.1.3 (HealthKit) and Google Play's Health Connect policy prohibit transmitting health data to advertising or analytics platforms — a prohibition that is routinely violated by apps that include general analytics SDKs like Mixpanel or Amplitude without filtering health data from event properties. GDPR Article 9 classifies health data as a special category requiring explicit consent separate from general ToS acceptance. HIPAA §164.514 applies if the app is used in a clinical context. Over-permissioning HealthKit data types (requesting menstrual cycle data for an app that has no tracking feature) is flagged automatically by Apple's review tooling.
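The over-permissioning check is mechanical: compare the data types the app requests against the types any feature actually consumes. A lint-style sketch — the function name and string identifiers are hypothetical, not the HealthKit API:

```typescript
// Sketch: flag every requested health data type that no feature consumes.
// Requested types should be derived from the permission request code,
// used types from the feature inventory.
function findOverPermissionedTypes(
  requested: string[],
  usedByFeatures: string[],
): string[] {
  const used = new Set(usedByFeatures);
  return requested.filter((t) => !used.has(t));
}
```

Any non-empty result is a pre-submission blocker under this check: each surplus type should either be removed from the request or justified by a shipped feature.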
Why this severity: Medium because health data policy violations trigger Apple rejection and can simultaneously create GDPR Article 9 exposure in the EU — two independent enforcement regimes with separate penalties and separate legal bases for action.
app-store-policy-compliance.regulated-industries.health-medical-compliance (See full pattern)

Apple guideline 5.4 and Google Play's VPN Service policy require that VPN apps use platform NetworkExtension/VpnService APIs exclusively, with no private API usage — and prohibit collecting user network traffic payloads for any purpose. A VPN that logs DNS queries or packet data to an analytics server violates GDPR Article 5 (data minimization), and if those logs travel or rest unencrypted, CWE-311 (Missing Encryption of Sensitive Data) as well. The business risk is heightened: Apple requires advance entitlement approval for Network Extension, so an app submitted without that approval will be rejected before reviewers even open it.
Why this severity: Low because VPN apps in compliance are straightforward to approve, and the required entitlements are obtainable through the standard developer portal — the severity reflects that violations are uncommon when the developer is following platform documentation.
app-store-policy-compliance.regulated-industries.vpn-compliance (See full pattern)

Apple has tightened accessibility enforcement since iOS 17, and WCAG 2.2 SC 1.1.1 (non-text content) and SC 1.4.4 (resize text) apply to mobile apps under Section 508 §502.3.1 for any app serving government contracts or enterprise clients. `allowFontScaling={false}` is the single most common AI-generated accessibility anti-pattern: the AI produces it to prevent layout overflow, but it globally disables iOS Dynamic Type, making the app unusable for the substantial share of iOS users who increase their system font size. VoiceOver users with zero `accessibilityLabel` props encounter an app where the screen reader reads raw component type names — 'button', 'button', 'image' — with no navigable context.
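Both anti-patterns above are detectable from component props alone. A lint-style sketch — the prop shape mirrors React Native conventions, but the helper itself is a hypothetical illustration, not an existing linter rule:

```typescript
// Sketch: flag the two anti-patterns named in this check —
// allowFontScaling={false} (disables Dynamic Type) and interactive
// elements with no accessibilityLabel (VoiceOver reads the raw role).
type AuditedProps = {
  allowFontScaling?: boolean;
  accessibilityLabel?: string;
  accessibilityRole?: string;
};

function accessibilityIssues(props: AuditedProps): string[] {
  const issues: string[] = [];
  if (props.allowFontScaling === false) {
    issues.push("allowFontScaling={false} disables iOS Dynamic Type");
  }
  const interactive =
    props.accessibilityRole === "button" || props.accessibilityRole === "link";
  if (interactive && !props.accessibilityLabel) {
    issues.push("interactive element has no accessibilityLabel");
  }
  return issues;
}
```

The correct fix is almost always to let text scale and constrain layout with `numberOfLines`/flexible containers, rather than freezing the font size.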
Why this severity: Medium because total absence of accessibility labels and disabled font scaling make the app functionally unusable for screen reader users, which Apple increasingly flags in review, and which creates legal exposure under ADA and Section 508 for enterprise distribution.
app-store-policy-compliance.platform-standards.accessibility (See full pattern)

Apple guideline 4.2 requires that universal apps look great on all supported device sizes — a stretched iPhone layout on iPad is a named rejection reason, not a judgment call. The ISO 25010 portability.adaptability metric applies directly here: an app that renders a 375px-wide phone layout on a 1024px iPad screen fails adaptability by definition. This is especially common in AI-generated React Native code that uses `Dimensions.get('window').width` as a static constant at module load time rather than as a reactive value — the measurement is correct on first launch but does not respond to device rotation, split-screen mode, or Stage Manager on iPad.
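The fix pattern is to derive layout from the *current* window width every time it changes (in React Native, via the `useWindowDimensions` hook) instead of caching `Dimensions.get('window').width` at module load. The decision itself reduces to a pure function; the breakpoint value and helper name below are illustrative assumptions:

```typescript
// Sketch: classify layout from the live window width. Called on every
// width change (rotation, split-screen, Stage Manager) rather than once
// at module load, the measurement never goes stale.
type LayoutClass = "compact" | "regular";

function layoutForWidth(windowWidth: number, breakpoint = 600): LayoutClass {
  return windowWidth >= breakpoint ? "regular" : "compact";
}
```

The anti-pattern is not the comparison but the timing: `const width = Dimensions.get('window').width` at the top of a module runs exactly once, so any layout keyed to it is frozen at the first-launch measurement.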
Why this severity: Medium because iPad layout failures are caught by automated tools during review and require a code change (not just a metadata fix), adding a full review cycle to the submission timeline.
app-store-policy-compliance.platform-standards.multi-device-support (See full pattern)

Apple guideline 3.1.1 explicitly prohibits iOS apps from containing UI elements that link to or promote competing app stores. Google Play mirrors this restriction for Android. The violation is usually not malicious — it originates from a shared React Native or Flutter codebase where a developer adds a cross-platform promotional banner without platform-gating it. But reviewers catch it systematically: a hardcoded `play.google.com` link in an iOS build is a trivial pattern match that triggers rejection with minimal reviewer judgment.
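Platform-gating the shared banner is a one-function fix. A minimal sketch — in React Native the platform would come from `Platform.OS`; the helper name and the placeholder URLs are illustrative:

```typescript
// Sketch: only ever surface the current platform's own store link, so an
// iOS build can never render a play.google.com URL and vice versa.
function storeLinkFor(platform: "ios" | "android"): string {
  const links = {
    ios: "https://apps.apple.com/app/id0000000000", // placeholder app id
    android: "https://play.google.com/store/apps/details?id=com.example.app",
  };
  return links[platform];
}
```

Routing every store URL through a helper like this also makes the violation greppable: any literal store URL outside the helper is a review risk.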
Why this severity: Low because the fix is mechanical once found — remove or platform-gate the cross-store link — and the violation is caught immediately in review rather than causing downstream user harm.
app-store-policy-compliance.platform-standards.no-competitor-mentions (See full pattern)

Apple guideline 2.1 requires that submitted apps be complete and production-ready — 'Beta Features' tab labels, '0.9.x' version numbers, and visible debug overlays signal to reviewers that the app is not ready for the store. This is one of the most mechanical rejection causes: automated pre-review scanning flags `0.x` version numbers and common pre-release strings before a human reviewer sees the app. The `allowFontScaling={false}` anti-pattern from AI code generation frequently co-occurs with beta labels for the same reason: the AI scaffolds placeholder-quality code that ships to review without cleanup.
Why this severity: Low because pre-release language is easy to remove once identified and causes no user harm — but it wastes a full submission cycle and, if repeated, trains the reviewer team to scrutinize future submissions more aggressively.
app-store-policy-compliance.platform-standards.no-beta-language (See full pattern)

Apple guideline 4.2 applies heightened scrutiny to apps in saturated categories — to-do lists, flashlights, weather apps, meditation apps, Bible/scripture apps, and daily quote apps collectively have hundreds of thousands of existing submissions. An app entering one of these categories without a clear technical differentiator in the codebase faces a higher probability of minimum-functionality rejection on borderline signals that would otherwise be tolerated. This is informational because the category alone does not cause rejection — but it amplifies the impact of every other borderline finding in this audit.
Why this severity: Informational because saturated-category positioning is a risk multiplier, not a direct violation — it raises the bar for every other check in this audit rather than constituting a standalone rejection reason.
app-store-policy-compliance.risk-indicators.saturated-category (See full pattern)

Apple issued explicit guidance in 2023–2024 that apps must provide lasting value beyond a thin AI API wrapper. An app consisting of a text input, an LLM API call, and a text output display — with no system prompt specialization, no caching, no offline capability, and no native device integration — is functionally equivalent to accessing the underlying model through a web browser. Both stores are actively scrutinizing AI apps under this lens. NIST AI RMF MAP-1.5 classifies thin-wrapper deployment as a governance gap because the developer has minimal insight into or control over the AI's behavior relative to the app's stated purpose.
Why this severity: Informational because a thin wrapper is not an automatic rejection trigger, but it places the app under heightened review scrutiny and amplifies the impact of any other borderline finding — especially content moderation gaps in the AI pipeline.
app-store-policy-compliance.risk-indicators.ai-wrapper-scrutiny (See full pattern)

Apple and Google maintain separate review tracks and documentation requirements for regulated domains: financial services, healthcare and medical devices, legal services, gambling, alcohol and tobacco, adult content, and firearms accessories. Operating in any of these domains means that all related regulated-industry checks in this audit carry blocking weight — a high-severity failure there is not a borderline case but a certain rejection. Reviewers in regulated categories may request documentation (regulatory licenses, age verification certification, data processing agreements) that can extend review timelines from days to weeks. GDPR Article 35 additionally requires a Data Protection Impact Assessment for high-risk data processing, which regulated-domain apps typically trigger.
Why this severity: Informational because operating in a regulated domain is not itself a violation — it is a signal that every other regulated-industry finding in this audit must be treated as a blocking issue before submission, and that documentation preparation is part of the release checklist.
app-store-policy-compliance.risk-indicators.regulated-business-model (See full pattern)