GDPR Security Requirements: The Technical Controls Article 32 Actually Demands
GDPR Article 32 is deliberately non-prescriptive. It requires controllers and processors to implement 'appropriate technical and organizational measures' to ensure a level of security appropriate to the risk — without specifying exactly which measures. This approach gives organizations flexibility but creates ambiguity: what is 'appropriate'? This guide translates Article 32 into specific technical controls, covers the breach detection and notification obligations that follow a security incident, and addresses how to document your security program in a way that satisfies regulators when they come asking.
What Article 32 Actually Requires
Article 32(1) lists four specific technical measures as examples of appropriate security, while making clear the list is not exhaustive:
a) Pseudonymisation and encryption of personal data: Encryption protects data at rest and in transit. Pseudonymization replaces directly identifying data with artificial identifiers, reducing the risk of exposure while retaining data utility for analytics. Both are explicitly mentioned — not as optional enhancements, but as baseline expectations for organizations processing significant volumes of personal data.
b) Ability to ensure ongoing confidentiality, integrity, availability, and resilience of processing systems: This maps to a functioning information security program. Confidentiality = access controls and encryption. Integrity = tamper detection and audit trails. Availability = backup, redundancy, and recovery capability. Resilience = the ability to continue processing after disruption.
c) Ability to restore availability and access to personal data in a timely manner in the event of a physical or technical incident: This is a business continuity and disaster recovery requirement with a data protection lens. Regulators expect tested backup and recovery procedures, documented RTO/RPO targets, and evidence that restoration has been practiced.
d) Process for regularly testing, assessing, and evaluating the effectiveness of technical and organizational measures: This is the requirement that is most often absent. A security program that exists on paper but has never been tested does not meet Article 32. Vulnerability assessments, penetration tests, security audits, and tabletop exercises are all forms of the regular testing Article 32 requires.
Article 32(2) — Risk-Proportionate Implementation: Measures must be appropriate to the risk, taking into account the nature, scope, context, and purposes of processing, and the varying likelihood and severity of risk to the rights and freedoms of natural persons. Higher risk processing (health data, financial data, children's data, large-scale processing) requires more stringent controls.
Encryption Requirements: What 'Appropriate' Means in Practice
Article 32 does not specify encryption algorithms or key lengths. Regulatory guidance and enforcement decisions establish what 'appropriate' encryption looks like.
Data in transit: TLS 1.2 at minimum; TLS 1.3 preferred for new implementations. Disable TLS 1.0 and 1.1 — they have known vulnerabilities and are no longer considered appropriate. HTTPS for all web applications processing personal data. Encrypted email for transmitting sensitive personal data (or secure file transfer alternatives). Verify certificate validity and proper certificate chain configuration.
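The "TLS 1.2 at minimum" expectation can be enforced in application code rather than left to defaults. A minimal sketch using Python's standard `ssl` module (the function name is illustrative):

```python
import ssl

# Sketch: a client context that refuses TLS 1.0/1.1 and requires
# certificate chain + hostname verification, per the guidance above.
def strict_tls_context() -> ssl.SSLContext:
    ctx = ssl.create_default_context()            # sane, verifying defaults
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # reject TLS 1.0 and 1.1
    ctx.check_hostname = True                     # verify hostname against cert
    ctx.verify_mode = ssl.CERT_REQUIRED           # reject unverifiable chains
    return ctx

ctx = strict_tls_context()
print(ctx.minimum_version == ssl.TLSVersion.TLSv1_2)  # True
```

Pass the returned context to `http.client`, `urllib`, or your HTTP library of choice; libraries that build their own contexts should be configured to the same floor.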
Data at rest: AES-256 for database and file encryption. Transparent Data Encryption (TDE) for SQL Server and Oracle databases containing personal data; core PostgreSQL has no native TDE, so rely on filesystem- or volume-level encryption (or a TDE-capable distribution). BitLocker (Windows) or FileVault (macOS) for endpoint storage. Encrypted backup storage — a cleartext backup is a second copy of the breach. Cloud storage encryption using customer-managed keys (CMK) rather than provider-managed keys where data sensitivity warrants.
Key management: Encryption is only as strong as key management. Weak key management is cited in regulatory decisions as inadequate despite the use of strong encryption. Requirements: separate storage of keys and encrypted data, key rotation policies (annually minimum), key access logging, hardware security modules (HSMs) for high-value key storage.
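The "annually minimum" rotation policy is easy to check mechanically. A sketch of such a check (the threshold and function name are illustrative, not a prescribed control):

```python
from datetime import datetime, timezone
from typing import Optional

MAX_KEY_AGE_DAYS = 365  # mirrors the annual-rotation minimum above

def key_rotation_overdue(last_rotated: datetime,
                         now: Optional[datetime] = None) -> bool:
    # Flags any key whose last rotation exceeds the policy threshold.
    now = now or datetime.now(timezone.utc)
    return (now - last_rotated).days > MAX_KEY_AGE_DAYS

stale = key_rotation_overdue(datetime(2023, 1, 1, tzinfo=timezone.utc),
                             now=datetime(2024, 6, 1, tzinfo=timezone.utc))
print(stale)  # True: 517 days since rotation
```

In practice this runs as a scheduled job against the key inventory (KMS or HSM metadata), with overdue keys raised as findings rather than silently rotated.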
Enforcement context: The Greek DPA fined a municipality after a breach — the personal data was stored in plaintext in a database accessible to employees beyond those with a processing need. The encryption requirement was not met. The UK ICO has cited lack of encryption as an aggravating factor in multiple breach-related enforcement actions.
Pseudonymization: Practical Implementation Beyond the Definition
Pseudonymization replaces directly identifying data (name, email, national ID number) with an artificial identifier (pseudonym) such that the data can no longer be attributed to a specific individual without the use of additional information kept separately.
Techniques:
Tokenization: Replace a real identifier (credit card number, national ID) with a randomly generated token stored in a secure token vault. The mapping between token and real value is stored separately. Widely used in payment processing. The tokenized data is pseudonymized; the vault containing the mapping is not.
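A minimal sketch of the tokenization pattern. The "vault" here is an in-memory dict for illustration; a production vault is a hardened, separately access-controlled store, and the class and token format are assumptions:

```python
import secrets

class TokenVault:
    """Toy token vault: random tokens map to real values, stored separately."""

    def __init__(self):
        self._vault = {}  # token -> real value; kept apart from the main dataset

    def tokenize(self, value: str) -> str:
        token = "tok_" + secrets.token_hex(16)  # random, not derivable from value
        self._vault[token] = value
        return token

    def detokenize(self, token: str) -> str:
        # In a real system this is a privileged, audited operation.
        return self._vault[token]

vault = TokenVault()
t = vault.tokenize("ID-1980-1234")        # e.g. a national ID (illustrative)
print(t.startswith("tok_"))               # True; token reveals nothing
print(vault.detokenize(t) == "ID-1980-1234")  # True, only via the vault
```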
Hashing with salt: Apply a cryptographic hash function with a per-record random salt to identifiers. The hashed value can be used for lookups without exposing the original value. Important: unsalted hashing of common values (email addresses, phone numbers) is reversible via rainbow tables — salting is required.
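A sketch of per-record salted hashing with Python's standard library. Unsalted SHA-256 of an email address is trivially reversible by hashing candidate addresses; the random salt blocks that precomputation. Function names are illustrative:

```python
import hashlib
import hmac
import secrets

def pseudonymize(identifier: str) -> tuple:
    """Return (salt, digest) for a per-record salted SHA-256 of the identifier."""
    salt = secrets.token_hex(16)  # random per-record salt
    digest = hashlib.sha256((salt + identifier).encode()).hexdigest()
    return salt, digest

def matches(identifier: str, salt: str, digest: str) -> bool:
    """Verify a candidate identifier against a stored (salt, digest) pair."""
    candidate = hashlib.sha256((salt + identifier).encode()).hexdigest()
    return hmac.compare_digest(candidate, digest)  # constant-time comparison

salt, digest = pseudonymize("alice@example.com")
print(matches("alice@example.com", salt, digest))  # True
print(matches("bob@example.com", salt, digest))    # False
```

Note the trade-off: a per-record salt prevents building a global lookup index; where indexed lookups are needed, a keyed hash (HMAC with a secret key stored separately) is the usual alternative.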
Encryption with separate key management: Encrypt the identifier field, store the decryption key separately with strict access controls. Functionally similar to tokenization but using symmetric or asymmetric encryption.
Data separation: Store pseudonyms in the main dataset; store the pseudonym-to-identity mapping in a separate system with separate access controls and audit logging.
GDPR benefit of pseudonymization: Pseudonymized data that an attacker cannot re-identify (because the mapping information is not compromised) significantly reduces the severity classification of a breach under Article 34. If a breach involves only pseudonymized data, notification to affected individuals may not be required if the attacker cannot re-identify them. This is a material regulatory benefit, not just a compliance formality.
Breach Detection: The Obligation Before the 72-Hour Clock Starts
Article 33 requires notification to the supervisory authority 'without undue delay and, where feasible, not later than 72 hours after having become aware of' a personal data breach. The 72-hour clock starts from when the controller becomes aware — but regulators interpret 'becomes aware' as when the organization has a reasonable degree of certainty that a breach has occurred, not necessarily when all facts are known.
Detection controls that affect when the clock starts: Organizations with poor detection capability may not become 'aware' of a breach for weeks or months. When discovered late, they face both the breach itself and the regulatory question of why detection took so long. Conversely, organizations with mature detection that identify a breach within hours have time to assess scope before the 72-hour window closes.
Technical controls that support early breach detection:
- SIEM with rules tuned for data exfiltration indicators (large outbound transfers, bulk data access, unusual API call volumes)
- DLP monitoring for personal data leaving approved destinations
- Database activity monitoring (DAM) for unusual query patterns on databases containing personal data
- Privileged access monitoring — insider threats accessing personal data beyond their normal scope
- Network anomaly detection for unexpected egress patterns
- Cloud audit logging (AWS CloudTrail, Azure Activity Log, GCP Audit Logs) with alerting on sensitive data access events
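The "bulk data access" indicator in the list above reduces to a simple counting rule. A toy sketch — the threshold, field names, and event shape are invented for illustration; a real SIEM rule would be tuned per data store and time window:

```python
from collections import Counter

BULK_ACCESS_THRESHOLD = 1000  # illustrative per-window baseline

def bulk_access_alerts(access_events, threshold=BULK_ACCESS_THRESHOLD):
    """Return principals whose record-access count exceeds the threshold."""
    counts = Counter(e["principal"] for e in access_events)
    return [p for p, n in counts.items() if n > threshold]

events = ([{"principal": "svc-reporting"}] * 50      # normal service traffic
          + [{"principal": "jdoe"}] * 1500)          # one user reads 1,500 records
print(bulk_access_alerts(events))  # ['jdoe']
```

The same shape applies to the other indicators (outbound bytes per host, API calls per key): establish a baseline, count per principal per window, alert on excess.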
The discovery-to-notification gap: The EDPB guidelines distinguish between an organization that discovers a breach and notifies within 72 hours vs. one that discovers it but internally debates whether it is a 'real' breach before notifying. Delays caused by internal deliberation about whether to notify — rather than genuine investigation to gather facts — are treated as violations of the 72-hour requirement. Begin notification procedures when you have reasonable certainty a breach occurred; update the notification as facts develop.
Breach Notification Requirements: 72 Hours and Beyond
The breach notification framework under GDPR involves two distinct obligations:
Article 33 — Notification to supervisory authority: Within 72 hours of becoming aware of a breach affecting personal data. This applies unless the breach is unlikely to result in a risk to individuals' rights and freedoms (a very narrow exception — document your risk assessment if you conclude notification is not required).
Required notification content:
- Nature of the personal data breach (type of breach, categories of individuals affected, approximate number affected)
- Contact details of the DPO or other contact point
- Likely consequences of the breach
- Measures taken or proposed to address the breach and mitigate its effects
If all information is not available within 72 hours, submit an initial notification with available information and provide additional information in phases. Phased notification is explicitly permitted — failing to notify because the investigation is not complete is not.
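The required content fields plus phased updates can be captured as a simple record. A sketch using a Python dataclass — the structure is illustrative, not a regulator-prescribed schema, and all example values are invented:

```python
from dataclasses import dataclass, field

@dataclass
class Article33Notification:
    breach_nature: str        # type of breach, categories and approx. number affected
    dpo_contact: str          # DPO or other contact point
    likely_consequences: str
    measures_taken: str       # taken or proposed mitigations
    phase: int = 1            # incremented for each supplementary filing
    updates: list = field(default_factory=list)

    def supplement(self, new_facts: str):
        """Record a phased update as the investigation develops."""
        self.phase += 1
        self.updates.append(new_facts)

n = Article33Notification(
    breach_nature="Unauthorized database access; ~4,000 customer records",
    dpo_contact="dpo@example.com",
    likely_consequences="Possible phishing targeting affected customers",
    measures_taken="Credentials rotated; affected host isolated",
)
n.supplement("Forensics confirms exfiltration limited to email addresses")
print(n.phase)  # 2
```

Keeping the initial filing and each supplement as discrete records also produces the audit trail regulators expect when reviewing whether notification was timely.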
Article 34 — Communication to affected individuals: Required when the breach is likely to result in a high risk to individuals. High risk typically means: financial harm (account takeover, identity fraud), physical harm, discrimination, or reputational damage resulting from exposure of sensitive categories of data (health, sexual orientation, criminal records, immigration status).
No specific time limit in Article 34, but communication must occur 'without undue delay.' Practical target: within 72 hours of the Article 33 notification when individual notification is also required.
Internal notification chain: Pre-document the internal escalation path: who is notified when a potential breach is discovered (DPO, Legal, CISO, CEO), who makes the assessment decision, who owns the supervisory authority notification. A decision tree documented in the IR plan prevents the 72-hour window from being consumed by internal governance debates.
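The pre-documented escalation path reduces to a lookup table, so the 72-hour window is not spent deciding who decides. A sketch — the stages and roles below are illustrative, not a prescribed chain:

```python
# Illustrative escalation map: incident stage -> roles to notify immediately.
ESCALATION = {
    "potential_breach_detected": ["DPO", "CISO"],
    "breach_confirmed":          ["DPO", "CISO", "Legal", "CEO"],
    "high_risk_to_individuals":  ["DPO", "CISO", "Legal", "CEO", "Comms"],
}

def notify_list(stage: str) -> list:
    # Unknown stages still loop in the DPO rather than notifying no one.
    return ESCALATION.get(stage, ["DPO"])

print(notify_list("breach_confirmed"))  # ['DPO', 'CISO', 'Legal', 'CEO']
```

Whether encoded in a runbook, a ticketing workflow, or code, the point is the same: the mapping exists before the incident, not during it.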
Documenting Your Technical and Organizational Measures (TOMs)
When regulators investigate a breach or respond to a complaint, they request the controller's documentation of technical and organizational measures. Organizations that cannot produce this documentation face the inference that adequate measures were not in place.
What the TOMs document should cover:
Organizational measures:
- Data protection governance (DPO role, privacy team structure)
- Security policy framework (information security policy, acceptable use, data classification)
- Staff training and awareness program
- Third-party risk management (vendor assessment process, DPA templates)
- Access control policies and procedures
Technical measures:
- Encryption standards (at rest and in transit, key management)
- Access control and authentication (MFA, RBAC, privileged access management)
- Network security (firewall configuration, segmentation, monitoring)
- Vulnerability management (assessment frequency, patching SLAs)
- Backup and recovery (schedule, testing frequency, RTO/RPO targets)
- Security monitoring (SIEM, logging, retention periods)
- Incident response capability (IR plan, response team, breach notification procedure)
Evidence of regular testing (Article 32(1)(d)):
- Penetration test results and remediation status
- Vulnerability assessment reports
- Security audit findings
- Tabletop exercise records and after-action findings
- DR test results
Practical format: Maintain the TOMs as a living document reviewed annually and updated after material changes to systems or processes. Cross-reference to your records of processing activities (RoPA) under Article 30. During a regulatory investigation, this document is your primary defense — it demonstrates that appropriate measures were in place, not simply that you have them now.
The bottom line
GDPR Article 32 compliance is not a documentation exercise — it requires an operational security program with encryption in place, breach detection capability, tested recovery procedures, and a functional incident response process with a documented breach notification chain. The organizations that receive the largest fines are those where regulators find personal data stored in plaintext, breaches undetected for months, and notification obligations missed because internal escalation paths did not exist. Build the controls first; the documentation of those controls is what you produce when regulators ask.
Frequently asked questions
What does GDPR Article 32 require for security?
Article 32 requires controllers and processors to implement technical and organizational measures appropriate to the risk, including: encryption and pseudonymization of personal data, measures to ensure ongoing confidentiality, integrity, availability, and resilience of processing systems, ability to restore availability and access to personal data after an incident, and a process for regularly testing and evaluating the effectiveness of security measures.
What encryption standards are considered appropriate under GDPR?
GDPR does not specify encryption standards, but regulatory guidance and enforcement precedent establish expectations: TLS 1.2 or 1.3 for data in transit (TLS 1.0/1.1 are inadequate), AES-256 for data at rest including databases and backups, and proper key management with keys stored separately from encrypted data, access logging, and rotation policies. Storing personal data in plaintext databases has been cited as an Article 32 violation in multiple enforcement actions.
What is the 72-hour breach notification requirement under GDPR?
Article 33 requires notification to the relevant supervisory authority (e.g., ICO in the UK, CNIL in France) within 72 hours of the controller becoming aware of a personal data breach. The clock starts when the organization has reasonable certainty a breach occurred — not when investigation is complete. Phased notification is permitted: submit initial notification within 72 hours with available information, then provide additional details as the investigation develops.
When do you need to notify affected individuals under GDPR?
Article 34 requires communication to affected individuals when a breach is likely to result in a high risk to their rights and freedoms. High risk typically means financial harm potential (account takeover, identity fraud), exposure of special categories of data (health, sexual orientation, criminal records), or physical harm risk. The notification must occur without undue delay — practically, within 72 hours of the Article 33 supervisory authority notification when individual notification is also required.
What is pseudonymization under GDPR and does it reduce compliance obligations?
Pseudonymization replaces identifying data with artificial identifiers, with the mapping between pseudonym and identity stored separately under strict access controls. GDPR explicitly recognizes it as a risk-reduction measure. If a breach involves only pseudonymized data and the attacker cannot re-identify individuals, the severity classification is reduced and individual notification under Article 34 may not be required. It also lowers the residual risk documented in Data Protection Impact Assessments.
What documentation should organizations maintain to demonstrate GDPR Article 32 compliance?
Maintain a Technical and Organizational Measures (TOMs) document covering: encryption standards, access control and authentication controls, network security measures, vulnerability management processes, backup and recovery procedures, security monitoring capability, and incident response procedures including breach notification chain. Critically, include evidence of regular testing — penetration test results, vulnerability assessment reports, DR test records, and tabletop exercise after-action findings. This is the document regulators request during investigations.
Sources & references
- GDPR Article 32 — Security of Processing
- EDPB Guidelines 01/2021 on Data Breach Notification
- ICO Guide to the GDPR — Security
- ENISA Pseudonymisation Techniques and Best Practices