GDPR Article 7 requires that consent be as specific as the processing activity — a blanket "I agree to AI" toggle does not meet the granularity standard when your product uses AI for distinct purposes (writing assistance, history saving, analytics, model training opt-in). CCPA §1798.120 gives consumers the right to opt out of the sale or sharing of their personal information. A single on/off switch forces users into an all-or-nothing choice, reducing trust and limiting your ability to demonstrate a specific lawful basis for each processing activity.
Medium because a monolithic AI toggle limits compliance defensibility under GDPR Art. 7 and erodes user trust, but does not by itself enable unauthorized data access.
Design AI settings with distinct, labeled controls for each processing activity. Store them as separate boolean fields so each has its own consent record.
```typescript
// types/user-settings.ts
interface UserAiSettings {
  ai_chat_enabled: boolean;     // AI responds to messages
  ai_history_saved: boolean;    // Conversation history persisted
  ai_usage_analytics: boolean;  // Usage included in aggregate analytics
  ai_training_opt_out: boolean; // Opt out of provider model training
}
```
Present these in a dedicated "AI & Privacy" settings section with plain-language descriptions — not legal prose — explaining what each toggle controls and what stops when it is disabled. Log the timestamp of each change for your consent audit trail.
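The audit-trail logging above can be sketched as follows. This is a minimal illustration, not a prescribed implementation: the `ConsentEvent` shape and the commented-out `saveConsentEvent` persistence call are assumptions to adapt to your storage layer.

```typescript
// A consent audit record: who changed which AI setting, to what, and when.
interface ConsentEvent {
  userId: string;
  setting: string;   // e.g. "ai_history_saved"
  newValue: boolean;
  changedAt: string; // ISO 8601 timestamp
}

// Build (and, in a real app, persist) an audit record for a settings change.
function recordConsentChange(
  userId: string,
  setting: string,
  newValue: boolean,
): ConsentEvent {
  const event: ConsentEvent = {
    userId,
    setting,
    newValue,
    changedAt: new Date().toISOString(),
  };
  // saveConsentEvent(event); // hypothetical helper — persist to your audit log
  return event;
}
```

Storing one immutable event per change (rather than overwriting a single timestamp) lets you reconstruct the full consent history for any user on request.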
ID: ai-data-privacy.data-collection-consent.granular-ai-opt-in
Severity: medium
What to look for: Examine the user settings schema and settings UI components for AI-specific settings beyond a single global toggle. Signals: multiple boolean fields prefixed with ai_ in the database schema or settings type definitions, a settings page section specifically for AI preferences, or feature-flag configuration that exposes distinct AI capabilities to the user.
Pass criteria: User settings include distinct toggles for individual AI features (e.g., separate controls for AI-assisted writing, AI conversation history saving, AI usage analytics) rather than a single monolithic on/off switch.
Fail criteria: The only AI-related user control is a single global enable/disable, or there are no AI-specific settings at all — the feature is always on with no user control.
Skip (N/A) when: The application has exactly one AI feature with a single clear purpose and scope, in which case one toggle is appropriate. Or no user-accessible settings UI is detected.
Cross-reference: For broader data handling practices, the Data Protection audit covers data lifecycle management.
Detail on fail: "User settings schema contains only a single AI toggle — no granular controls for distinct AI features or data sharing preferences"
Remediation: Users should be able to consent to specific AI capabilities rather than accepting all AI data processing as a package. This is both a best practice for trust and increasingly a regulatory expectation.
Design settings with clear categories:
```typescript
interface UserAiSettings {
  ai_chat_enabled: boolean;     // AI responds to messages
  ai_history_saved: boolean;    // Save conversation history
  ai_usage_analytics: boolean;  // Include my usage in aggregate analytics
  ai_training_opt_out: boolean; // Opt out of model training (if applicable)
}
```
Present these in a dedicated "AI & Privacy" settings section with plain-language descriptions of what each toggle controls.
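One way to keep the settings UI in plain language is to pair each toggle key with user-facing copy in a single map. The structure and wording below are illustrative assumptions, not prescribed values:

```typescript
// Plain-language copy for each AI toggle: a short label plus a sentence
// explaining what stops when the setting is disabled (or, for the opt-out,
// what enabling it does). Keys mirror the UserAiSettings field names.
const AI_SETTING_COPY: Record<string, { label: string; description: string }> = {
  ai_chat_enabled: {
    label: "AI replies to your messages",
    description: "Turn off to stop the assistant from generating responses.",
  },
  ai_history_saved: {
    label: "Save conversation history",
    description: "Turn off to discard conversations when the session ends.",
  },
  ai_usage_analytics: {
    label: "Include my usage in aggregate analytics",
    description: "Turn off to exclude your activity from product analytics.",
  },
  ai_training_opt_out: {
    label: "Opt out of model training",
    description: "Turn on to keep your data out of provider model training.",
  },
};
```

Keeping copy keyed by field name makes it easy to verify in review that every boolean in the schema has a corresponding user-facing explanation.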