Direct messages carry the highest reasonable expectation of privacy on any community platform. Routing them through an ML training pipeline without separate, explicit opt-in violates GDPR Art. 7 (consent must be specific to the purpose) and the governance expectations of the NIST AI RMF GOVERN function (data governance for AI systems). OWASP LLM02:2025 (Sensitive Information Disclosure) classifies leakage of training data as a top LLM risk because message content absorbed during training can later surface in model outputs, exposing private conversations to other users. The sender's consent alone is insufficient — both sender and recipient must opt in before message content is processed beyond delivery.
Critical because using private messages as training data without consent breaches GDPR Art. 7 specificity requirements and can cause private conversation content to leak through model outputs.
Add a distinct consent record for message_ai_processing — separate from general analytics consent — and gate any ML pipeline access behind a runtime check for both conversation participants. In src/lib/messaging/pipeline.ts or equivalent:
async function shouldProcessMessageForML(senderId: string, recipientId: string): Promise<boolean> {
  // Fetch the most recent consent record for each conversation participant.
  const [senderConsent, recipientConsent] = await Promise.all([
    db.userConsent.findFirst({
      where: { userId: senderId, processingType: 'message_ai_processing' },
      orderBy: { consentedAt: 'desc' },
    }),
    db.userConsent.findFirst({
      where: { userId: recipientId, processingType: 'message_ai_processing' },
      orderBy: { consentedAt: 'desc' },
    }),
  ]);
  // A missing record is treated as "no consent"; both parties must have opted in.
  return (senderConsent?.consentGiven ?? false) && (recipientConsent?.consentGiven ?? false);
}
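A minimal, self-contained sketch of the same dual-consent gate, with an in-memory map standing in for the database (all names here are illustrative, not the platform's actual schema):

```typescript
type ProcessingType = 'message_ai_processing';

interface ConsentRecord {
  userId: string;
  processingType: ProcessingType;
  consentGiven: boolean;
  consentedAt: Date;
}

// In-memory stand-in for the consent table, keyed by userId.
const consentStore = new Map<string, ConsentRecord>();

function recordConsent(userId: string, allow: boolean): void {
  consentStore.set(userId, {
    userId,
    processingType: 'message_ai_processing',
    consentGiven: allow,
    consentedAt: new Date(),
  });
}

// Both participants must have an affirmative record; absence means no consent.
function shouldProcessForML(senderId: string, recipientId: string): boolean {
  const sender = consentStore.get(senderId);
  const recipient = consentStore.get(recipientId);
  return (sender?.consentGiven ?? false) && (recipient?.consentGiven ?? false);
}
```

The key design choice is the default: a user with no record at all is treated exactly like a user who declined, so the pipeline fails closed.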
If the platform never uses messages for ML, make that commitment explicit in the privacy policy and document the exemption in code instead of implementing this check.
ID: community-privacy-controls.visibility.dm-training-consent
Severity: critical
What to look for: Enumerate every relevant item. Check whether platform documentation or terms of service state that direct messages may be used for AI training, content recommendations, or any algorithmic processing beyond secure storage and delivery. If so, verify that a separate explicit opt-in exists for that processing, and look for consent records specific to message use.
Pass criteria: At least one of the following conditions is met: (a) if direct messages are used for any processing beyond delivery and storage, a separate, explicit opt-in was obtained from both sender and recipient before any such processing; (b) documentation clearly states what processing occurs and allows users to opt out.
Fail criteria: Direct messages are processed for AI training, content analysis, or recommendation algorithms without explicit opt-in. Privacy policy is unclear or states default inclusion in training.
Do NOT pass when: The item exists only as a placeholder, stub, or TODO comment — partial implementation does not count as passing.
Skip (N/A) when: Platform documentation explicitly states that direct messages are NEVER used for training or processing beyond secure delivery.
Cross-reference: For broader data handling practices, the Data Protection audit covers data lifecycle management.
Detail on fail: Describe the unauthorized processing. Example: "Privacy policy states 'messages may be used to improve AI features' with no opt-in mechanism found. Users cannot disable this processing." or "Message content sent to external ML platform for recommendation training without consent flow."
Remediation: Add explicit opt-in for message processing and ensure users understand the scope:
// In user settings/preferences
async function updateMessageProcessingConsent(userId: string, allowProcessing: boolean) {
  await db.userConsent.upsert({
    where: { userId_processingType: { userId, processingType: 'message_ai_processing' } },
    create: {
      userId,
      processingType: 'message_ai_processing',
      consentGiven: allowProcessing,
      consentedAt: new Date(),
      policyVersion: '1.0',
    },
    update: {
      consentGiven: allowProcessing,
      updatedAt: new Date(),
    },
  });
}

// Before any message processing for ML
const consent = await db.userConsent.findUnique({
  where: { userId_processingType: { userId, processingType: 'message_ai_processing' } },
});
if (!consent?.consentGiven) {
  return; // Skip message processing for this user
}
// Repeat the same check for the other conversation participant before processing.
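Putting the pieces together, a hedged sketch of where the gate belongs in the delivery path (queue names and the `optedIn` set are illustrative stand-ins): delivery always proceeds, and only ML ingestion is gated on dual consent.

```typescript
interface Message {
  senderId: string;
  recipientId: string;
  body: string;
}

// Stand-ins for the ML ingestion queue and the delivery store.
const mlTrainingQueue: Message[] = [];
const deliveredMessages: Message[] = [];

// Stand-in consent check: only users in this set have opted in.
const optedIn = new Set<string>(['alice']);

function hasConsent(userId: string): boolean {
  return optedIn.has(userId);
}

function deliverMessage(msg: Message): void {
  // Delivery is never gated on ML consent.
  deliveredMessages.push(msg);
  // ML ingestion requires BOTH participants to have opted in.
  if (hasConsent(msg.senderId) && hasConsent(msg.recipientId)) {
    mlTrainingQueue.push(msg);
  }
}

deliverMessage({ senderId: 'alice', recipientId: 'bob', body: 'hi' });
// Delivered: 1 message. ML queue: empty, because bob never opted in.
```

Keeping the gate inside the delivery function (rather than in a downstream batch job) ensures unconsented content never enters the ML queue in the first place.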