GDPR Art. 35 mandates a Data Protection Impact Assessment before high-risk processing begins; it is not a retrospective exercise. High-risk processing categories include systematic user profiling, automated decision-making with legal effects, large-scale processing of special category data (health, biometrics, religion), and behavioral scoring. AI-built apps are especially prone to inadvertent DPIA triggers: recommendation engines, usage-based pricing tiers, and sentiment analysis of user content can all qualify. ISO/IEC 27001:2022 control A.5.34 likewise requires organizations to assess privacy risks before implementing new processing. Proceeding with high-risk processing without a DPIA is an Art. 35 violation even if no harm occurs.
The severity is low because the obligation only activates for high-risk processing activities; when triggered, however, a missing DPIA is a direct Art. 35 violation that supervisory authorities can act on before any breach occurs.
Identify whether any current or planned features trigger GDPR Art. 35 (profiling, automated decisions, special category data at scale). For each that does, document a DPIA before enabling the feature in production.
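The trigger check above can be sketched in code. This is an illustrative sketch only: the trigger list paraphrases the GDPR Art. 35(3) categories, and the `processing_traits` tagging convention is a hypothetical way a team might annotate features, not a GDPR requirement.

```python
# Illustrative sketch: DPIA_TRIGGERS paraphrases GDPR Art. 35(3) categories;
# the trait-tagging convention is an assumed internal practice.
DPIA_TRIGGERS = {
    "systematic_profiling": "Systematic profiling with legal or similarly significant effects",
    "automated_decisions": "Automated decision-making producing legal effects",
    "special_category_at_scale": "Large-scale processing of special category data",
    "public_monitoring": "Systematic monitoring of publicly accessible areas",
}

def dpia_required(processing_traits: set[str]) -> list[str]:
    """Return the Art. 35 triggers matched by a feature's declared traits."""
    return [DPIA_TRIGGERS[t] for t in sorted(processing_traits & DPIA_TRIGGERS.keys())]

# A recommendation engine that profiles behavior to gate pricing tiers
# matches the systematic-profiling trigger, so a DPIA is needed first:
hits = dpia_required({"systematic_profiling", "analytics"})
```

Any non-empty result means the feature should not ship until a DPIA is documented.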
# DPIA: [Processing Activity] — e.g., "Behavioral Profiling for Feature Recommendations"
## 1. Description of Processing
What data, purpose, means, duration.
## 2. Necessity and Proportionality
Why necessary? Less privacy-invasive alternative considered?
## 3. Risks to Data Subjects
| Risk | Likelihood | Impact | Level |
|-------------------------|------------|--------|--------|
| Inaccurate profiling | Medium | High | High |
| Behavior data breach | Low | High | Medium |
## 4. Mitigations
| Risk | Mitigation | Owner | Status |
|----------------------|-------------------------------------------|-------|--------|
| Inaccurate profiling | Human review available; opt-out mechanism | PM | Done |
## 5. Residual Risk
If High residual risk: consult supervisory authority before proceeding (GDPR Art. 36).
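The Likelihood x Impact roll-up used in the risk table can be sketched as follows. The three-level scale and the multiplicative aggregation rule are assumptions for illustration, not anything GDPR prescribes; substitute your organization's risk methodology.

```python
# Assumed three-level scale and score thresholds -- adapt to your own
# risk methodology; GDPR does not mandate a specific matrix.
LEVELS = {"Low": 1, "Medium": 2, "High": 3}

def risk_level(likelihood: str, impact: str) -> str:
    """Combine likelihood and impact into an overall risk level."""
    score = LEVELS[likelihood] * LEVELS[impact]
    if score >= 6:   # e.g. Medium x High, High x High
        return "High"
    if score >= 3:   # e.g. Low x High, Medium x Medium
        return "Medium"
    return "Low"
```

This reproduces the example table: Medium likelihood x High impact yields High, Low x High yields Medium.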
Store DPIAs in docs/dpia/ alongside your other compliance documentation, and review them whenever the processing purpose or scope changes.
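One way to keep that rule enforced is a pre-merge check that every high-risk feature maps to a document under docs/dpia/. A minimal sketch, assuming a hypothetical `features.json` manifest with a `requires_dpia` flag (both are illustrative conventions, not part of any standard):

```python
# Sketch of a CI gate: features.json and its "requires_dpia" flag are
# assumed internal conventions. The check fails if any flagged feature
# lacks a matching markdown file under docs/dpia/.
import json
from pathlib import Path

def missing_dpias(manifest_path: str, dpia_dir: str = "docs/dpia") -> list[str]:
    """List high-risk features that have no matching DPIA document."""
    features = json.loads(Path(manifest_path).read_text())
    return [
        name
        for name, meta in sorted(features.items())
        if meta.get("requires_dpia") and not (Path(dpia_dir) / f"{name}.md").exists()
    ]
```

Wiring this into CI (exit nonzero when the list is non-empty) makes "no DPIA, no deploy" mechanical rather than a manual review step.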
ID: data-protection.compliance-documentation.dpia-for-high-risk
Severity: low
What to look for: Enumerate every relevant processing activity and identify whether the application performs high-risk data processing. GDPR Article 35 triggers a mandatory DPIA for: (1) systematic profiling of users with legal or similarly significant effects, (2) large-scale processing of special category data (health, religion, biometrics, sexual orientation), (3) systematic monitoring of publicly accessible areas, (4) automated decision-making that produces legal effects. Common in AI-built apps: recommendation engines, behavioral scoring, credit scoring, user segmentation for pricing. Look for a DPIA document in project documentation. If high-risk processing exists, check whether risks and mitigations are identified.
Pass criteria: If high-risk processing is present, a Data Protection Impact Assessment exists that identifies the processing activity, the necessity and proportionality of the processing, the specific risks to data subjects, and the mitigation measures in place (or planned) for each risk.
Fail criteria: High-risk processing is identified but no DPIA exists. DPIA exists but is a template with no actual content specific to the application.
Skip (N/A) when: Application does not perform high-risk processing: no profiling, no automated decision-making with legal effects, no special category data, no large-scale surveillance.
Detail on fail: Example: "Application performs behavioral profiling to adjust pricing tiers (high-risk per GDPR Art. 35) but no DPIA documented." or "Application processes health data (special category) at scale with no DPIA."
Remediation: Conduct a DPIA for each high-risk processing activity. GDPR Art. 35(7) prescribes the minimum content; a structure like the following covers it:
# DPIA: [Processing Activity Name] — e.g., "User Behavioral Profiling for Feature Recommendations"
## 1. Description of Processing
What data is processed, for what purpose, by what means, for how long.
## 2. Necessity and Proportionality
Why is this processing necessary? Is there a less privacy-invasive alternative?
## 3. Risks to Data Subjects
| Risk                                                           | Likelihood | Impact | Risk Level |
|----------------------------------------------------------------|------------|--------|------------|
| Inaccurate profiling leads to discrimination in service access | Medium     | High   | High       |
| Data breach exposes behavior data                              | Low        | High   | Medium     |
## 4. Mitigation Measures
| Risk | Mitigation | Owner | Status |
|---------------------------|-----------------------------------------------|----------|---------|
| Inaccurate profiling | Human review available; opt-out mechanism | Product | Done |
| Behavior data breach | Encryption at rest, pseudonymous IDs in logs | Eng | Done |
## 5. Residual Risk and DPO Consultation
Residual risk level after mitigations: [Low/Medium/High]
If High: consult the supervisory authority before proceeding (GDPR Art. 36).