GDPR Article 28 requires that sub-processors (including error monitoring services) process data only as instructed by the controller and under a Data Processing Agreement. CWE-532 and OWASP A09 both cover sensitive data ending up in log outputs. Error monitoring services like Sentry, LogRocket, and Datadog are third parties with their own retention policies, access controls, and breach surfaces — often less stringent than your primary database. When a developer attaches the full AI messages array to a Sentry error for debugging convenience, they have just disclosed every user message in that request to a third party that was not disclosed in the privacy policy for that purpose.
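The leak described above can be made concrete with a stub reporter. The sketch below substitutes a hypothetical `fakeSentry` object for the real SDK so the payload is inspectable; in production, the same `extra` object leaves your infrastructure for the monitoring vendor.

```javascript
// Stub standing in for the real Sentry SDK — names are illustrative only.
const reported = []
const fakeSentry = {
  captureException(error, context) {
    // In production this payload is transmitted to the monitoring service.
    reported.push(context.extra)
  },
}

const messages = [{ role: 'user', content: 'my SSN is 123-45-6789' }]

try {
  throw new Error('AI generation failed')
} catch (error) {
  // Anti-pattern: full user messages attached "for debugging convenience".
  fakeSentry.captureException(error, { extra: { messages } })
}

// The third-party reporter now holds the user's message verbatim.
```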
High because attaching prompt content to error reports constitutes an undisclosed disclosure to an additional third-party sub-processor in violation of GDPR Art. 28, compounding the data exposure beyond the intended AI provider.
Sanitize error context before passing it to external error reporters. Log only metadata that helps debug the failure — not the content that caused it.
try {
  const response = await openai.chat.completions.create({ model: 'gpt-4o', messages })
} catch (error) {
  // Safe — only metadata:
  Sentry.captureException(error, {
    extra: {
      context: 'AI generation failed',
      model: 'gpt-4o',
      messageCount: messages.length,
      // NOT: prompt, messages, userInput, req.body
    }
  })
  throw error
}
Audit every captureException, captureEvent, and addBreadcrumb call in your AI pipeline. Create a lint rule or code review checklist item that prohibits messages, prompt, or userInput as error reporter context keys.
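The lint rule suggested above can be sketched with ESLint's built-in `no-restricted-syntax` rule. This is a starting point, not a complete guard: the selector assumes shorthand or plain properties and will miss spread arguments or aliased keys, so it should supplement — not replace — code review.

```javascript
// Sketch of an ESLint flat-config entry that flags prohibited context keys
// inside captureException calls. Adjust the selector to your codebase.
export default [
  {
    rules: {
      'no-restricted-syntax': [
        'error',
        {
          selector:
            "CallExpression[callee.property.name='captureException'] Property[key.name=/^(messages|prompt|userInput)$/]",
          message:
            'Do not pass prompt/message content to error reporters (ai-data-privacy.third-party-ai-provider.no-user-data-error-reporting).',
        },
      ],
    },
  },
]
```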
ID: ai-data-privacy.third-party-ai-provider.no-user-data-error-reporting
Severity: high
What to look for: Enumerate every relevant call site. Locate error-handling catch blocks around AI API calls. Check whether the error reporting call (Sentry captureException, LogRocket captureException, Datadog error tracking, etc.) includes the original request payload, prompt, or messages array as additional context. Look for patterns like Sentry.captureException(error, { extra: { prompt, messages, userInput } }) or spreading the full request body into error context.
Pass criteria: At least one of the following conditions is met. Error reporting calls near AI API invocations do not include the prompt content, messages array, or user input as extra context. Only metadata such as error type, model name, status code, and request ID is included in error reports.
Fail criteria: Error reporting calls explicitly pass the prompt, messages array, or user input as part of the error context — sending this data to the error monitoring service.
Skip (N/A) when: No external error reporting service (Sentry, LogRocket, Datadog, Bugsnag, etc.) is detected in package.json.
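The skip condition above can be automated by scanning the project's dependency manifest. The helper below is a sketch; `hasErrorReporter` and the package list are illustrative (the listed names are real npm packages, but the list is not exhaustive).

```javascript
// Decide whether this check applies by looking for known error-reporting
// packages in package.json. Extend the list for your stack.
const ERROR_REPORTER_PACKAGES = [
  '@sentry/node',
  '@sentry/browser',
  'logrocket',
  'dd-trace',
  '@bugsnag/js',
]

function hasErrorReporter(packageJson) {
  const deps = {
    ...(packageJson.dependencies ?? {}),
    ...(packageJson.devDependencies ?? {}),
  }
  return ERROR_REPORTER_PACKAGES.some((name) => name in deps)
}
```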
Detail on fail: "Error handler in [file] passes prompt/message content to [error service] — user input reaching the AI provider is also being sent to the error monitoring service"
Remediation: An error monitoring service is yet another third party with its own data retention policies. User prompt content should not travel beyond the AI provider — certainly not to debugging infrastructure.
Sanitize error context before reporting:
try {
  const response = await openai.chat.completions.create({
    model: 'gpt-4o',
    messages
  })
} catch (error) {
  // Safe: only metadata
  Sentry.captureException(error, {
    extra: {
      context: 'AI generation failed',
      model: 'gpt-4o',
      messageCount: messages.length,
      // NOT: prompt, messages, userInput, requestBody
    }
  })
  throw error
}
Review each error reporting call site in your AI pipeline and remove any that include full request payloads.
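One way to make that review durable is an allow-list scrubber applied to every error context before it reaches a reporter. The helper and key names below are a hypothetical sketch, not part of the rule above; tune the allow-list to the metadata your team actually needs.

```javascript
// Allow-list safe metadata keys so prompt/message content can never reach
// an error reporter, even if a call site passes it by mistake.
const SAFE_KEYS = new Set(['context', 'model', 'messageCount', 'statusCode', 'requestId'])

function scrubErrorExtra(extra) {
  return Object.fromEntries(
    Object.entries(extra).filter(([key]) => SAFE_KEYS.has(key))
  )
}
```

Call sites then wrap their context, e.g. `Sentry.captureException(error, { extra: scrubErrorExtra(ctx) })`, so a stray `messages` key is silently dropped instead of disclosed.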