GDPR Art. 35 mandates a Data Protection Impact Assessment before carrying out high-risk processing: systematic profiling with significant effects, large-scale processing of special category data (health, biometrics, racial or ethnic origin), automated decision-making with legal consequences, or large-scale systematic monitoring. A recommendation engine, credit scoring algorithm, or health data platform that launches without a DPIA is not just non-compliant; if the assessment would show high residual risk, Art. 36 requires prior consultation with the supervisory authority before processing can begin. Most SaaS products do not trigger Art. 35, but without an explicitly documented assessment you cannot demonstrate the exemption.
Severity is info because most standard SaaS products do not trigger Art. 35 thresholds. The finding is about documenting the assessment, and completing a DPIA when the triggers do apply, not about an immediate data exposure risk.
Assess whether your application triggers any Art. 35 threshold, document the conclusion, and complete a DPIA if required. GDPR prescribes no particular format, so a concise DPIA stored in docs/dpia.md is valid. For example:
```markdown
# DPIA Assessment (Art. 35 GDPR)
Date: 2026-02-22

- Systematic profiling with significant effects: No
- Large-scale special category data: No
- Systematic monitoring of public spaces: No
- Automated decisions with legal/financial effects: No

Conclusion: No Art. 35 trigger. DPIA not mandatory.
```
If a trigger applies, document the processing description, necessity and proportionality assessment, risks to data subjects, and mitigations. Reference the ICO's DPIA template (ico.org.uk) or the EDPB guidelines on DPIAs (wp248rev.01) as starting points. Store the completed DPIA in docs/dpia-[processing-activity].md and review it whenever the processing materially changes.
ID: gdpr-readiness.data-processing.dpia-high-risk
Severity: info
What to look for: Identify whether the application performs any processing that triggers a mandatory DPIA under GDPR Article 35. High-risk processing categories include: (1) systematic profiling with legal or similarly significant effects, (2) large-scale processing of special category data (health, biometrics, religion, sexual orientation, racial/ethnic origin), (3) systematic monitoring of publicly accessible areas, (4) automated decision-making with legal effects, (5) large-scale processing of children's data, (6) novel uses of technology. Look for a DPIA document in docs/, SECURITY.md, DATA_PROTECTION.md, or linked from the README. If high-risk processing is identified in the codebase (recommendation engines, behavioral scoring, health data processing, credit assessments), verify a DPIA exists and is specific to the application — not a generic template. Count all instances found and enumerate each.
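The detection step above can be sketched as a repository heuristic. This is an illustrative sketch only: the keyword list, file locations, and outcome strings are assumptions for this example, not an authoritative Art. 35 test.

```python
"""Heuristic scan for Art. 35 trigger indicators and DPIA documents.

Sketch under assumptions: the keyword list and docs/ layout below are
illustrative, not an authoritative Art. 35 test.
"""
from pathlib import Path

# Rough code-level indicators of high-risk processing (illustrative, not exhaustive).
TRIGGER_KEYWORDS = [
    "credit_score", "recommendation_engine", "biometric", "health_record",
    "behavioral_scoring", "facial_recognition",
]

def find_dpia_docs(repo: Path) -> list[Path]:
    """Return candidate DPIA documents directly under docs/ (dpia*.md)."""
    docs_dir = repo / "docs"
    return sorted(docs_dir.glob("dpia*.md")) if docs_dir.is_dir() else []

def find_trigger_hits(repo: Path) -> list[tuple[Path, str]]:
    """Return (file, keyword) pairs where a trigger indicator appears in source."""
    hits = []
    for path in repo.rglob("*.py"):
        try:
            text = path.read_text(errors="ignore").lower()
        except OSError:
            continue
        hits.extend((path, kw) for kw in TRIGGER_KEYWORDS if kw in text)
    return hits

def assess(repo: Path) -> str:
    """Map scan results onto the finding's pass/fail/review outcomes."""
    docs, hits = find_dpia_docs(repo), find_trigger_hits(repo)
    if hits and not docs:
        return f"FAIL: {len(hits)} trigger indicator(s), no DPIA document"
    if not hits:
        return "PASS/N-A: no trigger indicators; still document the assessment"
    return f"REVIEW: {len(docs)} DPIA doc(s); check they are application-specific"
```

A hit is only an indicator for manual review; the actual Art. 35 determination still requires human judgment about the processing, its scale, and its effects.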
Pass criteria: If high-risk processing is present, a DPIA document exists that identifies the specific processing activity, its necessity and proportionality, the risks to data subjects, and concrete mitigation measures. If no high-risk processing is present, this is documented (e.g., "DPIA assessment: no Art. 35 triggers identified"). At least 1 implementation must be confirmed.
Fail criteria: High-risk processing is identified in the codebase but no DPIA exists. A DPIA document is present but is a blank template with no application-specific content.
Skip (N/A) when: Application clearly does not perform high-risk processing (no profiling, no special category data, no automated decisions with legal effects, no large-scale monitoring) and this is documented.
Detail on fail: Example: "Application includes a credit scoring algorithm (automated decision-making with financial effects) but no DPIA found." or "Application processes health data at scale with no DPIA documented."
Remediation: Conduct a DPIA if any Art. 35 triggers apply. A concise DPIA is valid:
```markdown
# DPIA: [Processing Activity] — e.g., Behavioral Recommendation Engine

## 1. Description
We analyze user interaction patterns (pages viewed, time on feature, actions taken)
to generate personalized feature recommendations. Data: pseudonymous user ID, event
sequences. No special category data. Volume: up to 50,000 users.

## 2. Necessity and Proportionality
Recommendations improve onboarding success by ~30% (measured). Less privacy-invasive
alternatives (manual curation) insufficient for scale. Processing is proportionate.

## 3. Risks to Data Subjects
| Risk                            | Likelihood | Severity | Overall |
|---------------------------------|------------|----------|---------|
| Behavior data breach            | Low        | Medium   | Low     |
| Discriminatory pattern-matching | Low        | High     | Medium  |

## 4. Mitigations
| Risk                           | Mitigation                           | Status |
|--------------------------------|--------------------------------------|--------|
| Behavior data breach           | Pseudonymous IDs; encryption at rest | Done   |
| Discriminatory recommendations | No demographic inputs used in model  | Done   |

## 5. Residual Risk
Low — no supervisory authority consultation required.
```