On February 18, 2026, the European Data Protection Board (“EDPB”) published its Report on Stakeholder Event on Anonymisation and Pseudonymisation of 12 December 2025 (the “Report”). The Report summarises feedback from a remote stakeholder event convened to inform the EDPB’s ongoing work on Guidelines 01/2025 on Pseudonymisation (version for public consultation available here) and forthcoming guidance on anonymisation. The event gathered input from 115 participants spanning industry, NGOs, academia, law firms, and public sector bodies.
The objective of the Report is to capture stakeholder insights on how the General Data Protection Regulation (“GDPR”) applies to anonymisation and pseudonymisation, particularly following the Court of Justice of the European Union’s (“CJEU”) judgment in EDPS v SRB (C‑413/23 P). (See our previous blog post here.)
Although the Report does not provide definitive guidance, it identifies areas where stakeholders believe further clarification is needed. Some interesting discussion points included:
1. How to conduct contextual, actor‑specific identifiability assessments.
According to the Report, participants disagreed on how identifiability should be assessed from the perspective of different actors (e.g., controllers, processors, joint controllers, different organisational units in intra-group/company data sharing) and asked for clearer methodological direction. Some reportedly argued for assessing identifiability solely from the processor’s perspective, suggesting data could be considered “anonymous” if the controller applies effective pseudonymisation and confirms that the data cannot be re-identified by the processor. Others maintained that processors process data on behalf of the controller, and therefore the controller’s perspective must always apply. Many participants also asked for clarity on key issues such as whether data processing agreements are required if the receiving party has no means reasonably likely to be used to identify the data subjects, and whether contractual obligations prohibiting re-identification may be enough to address the problem of having limited visibility into processors’ identification methods.
Participants also requested guidance on how to approach complex or high‑risk data types in identifiability assessments and, in particular, called for sector‑specific guidance for research and health data (including clinical trials) and online advertising. In the online advertising context, the Report noted that debate persisted over whether online identifiers constitute “personal data”—with some asking the EDPB to reconsider the “singling out” criterion and others insisting that identifiers enabling targeted action on individuals are inherently personal data.
2. Specific GDPR provisions causing difficulties.
The Report states that participants highlighted that questions around identifiability intersect with a range of GDPR provisions, including Articles 6 (legal bases for processing), 28 (controller–processor obligations), 32–34 (security of processing and breach notification), Chapter III (data subject rights), Chapter V (rules on international transfers), and Article 11 (processing where identification is not required). Some participants suggested revisiting the EDPB’s position on Article 11 as expressed in its pseudonymisation guidelines (available here), which they view as overly restrictive. Article 11(2) relieves controllers of certain data subject rights obligations where the controller can demonstrate that it is not in a position to identify the data subject. According to the Report, participants also debated whether a separate legal basis under Article 6 is needed for the transmission of pseudonymised data or the act of anonymisation itself.
3. How onward transmissions may affect identifiability.
Stakeholders raised questions about how actual or foreseeable onward disclosures can change identification risks and the implications for initial controllers. According to the Report, many participants agreed that “potential transmissions” should refer to actual or foreseeable transmissions, not purely theoretical ones. Stakeholders agreed on a need for greater legal certainty but had diverging views on how to achieve it – e.g., through a toolbox, a clear list of criteria for identifiability assessment, a list of factors and measures, or a principles-based assessment. The Report noted that the burden of proof issue attracted significant debate: some noted that transmitting parties cannot be expected to know the full capabilities of every entity in the receiving chain, while others emphasised the controller’s accountability obligations under the GDPR.
4. Assessing “means reasonably likely to be used” (“MRLTBU”) to identify individuals.
Stakeholders sought guidance on how to evaluate MRLTBU, identifying three complementary categories of measures: legal (including contractual), organisational, and technical. The Report notes that several participants drew attention to helpful guidance contained in Recital 26 GDPR. Some participants also stressed that the assessment should factor in, for example, auxiliary datasets, feasibility, cost, and technological capability. Views diverged on whether purely hypothetical threats—including illegal acts such as hacking—should be included in the assessment. According to the Report, stakeholders also asked for more detailed guidance on the use of technical measures, such as concrete examples of privacy-enhancing technologies (“PETs”) that can be relied upon. Participants also requested guidance on appropriate measures for key management (e.g., for encryption or tokenisation) and resistance to quantum computing.
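To make the key-management point concrete: a common tokenisation technique is keyed hashing, where a direct identifier is replaced by an HMAC computed under a secret key. The sketch below is purely illustrative (the function name, key, and sample identifier are invented for this example, and the Report does not endorse any specific technique); it shows why the secrecy and governance of the key, rather than the hash itself, determines whether the holder of the tokens has means reasonably likely to be used to re-identify individuals.

```python
import hmac
import hashlib

def pseudonymise(identifier: str, secret_key: bytes) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256).

    Without access to secret_key, the token cannot feasibly be
    reversed to the original identifier; with the key, the same
    input always maps to the same token, preserving linkability
    across records for the party that holds the key.
    """
    return hmac.new(secret_key, identifier.encode("utf-8"),
                    hashlib.sha256).hexdigest()

# In practice the key would be stored separately from the data,
# access-controlled, and rotated -- the organisational measures
# stakeholders asked the EDPB to elaborate on.
key = b"example-secret-key"
token = pseudonymise("jane.doe@example.com", key)
print(token)  # a 64-character hex token; stable for the same key and input
```

A recipient who receives only the tokens (and never the key) is in a very different position, for identifiability purposes, from the controller that holds both, which is exactly the actor-specific distinction debated throughout the event.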
5. When pseudonymised data should be considered personal data for a given recipient.
Stakeholders highlighted difficulties in identifying all potential recipients and understanding their re‑identification capabilities, particularly in multi‑layered environments such as research consortia or online advertising. According to the Report, several participants emphasised that a controller’s lack of knowledge about recipient capabilities should not serve as a basis for overlooking risk (i.e., data should be presumed personal for the recipient unless anonymity is proven and documented). Participants also discussed complexities related to the data itself, noting that certain types of data—such as genetic or highly technical datasets—may carry different identifiability risks depending on the auxiliary information available to each recipient. In addition, participants expressed varying views on which technical and organisational measures a pseudonymising controller can implement to reduce re-identification risk. Stakeholders noted that contractual obligations are useful but may be insufficient where parties have different technical capabilities or levels of influence. The importance of robust audit frameworks, including third-party audits and regular re-evaluation every 2-3 years, was also emphasised.
Next Steps
The Report outlines the complexities of anonymisation and pseudonymisation under the GDPR and relevant case law. Although the Report does not provide definitive guidance, it identifies the issues that are likely to be addressed in forthcoming EDPB guidance. Both sets of guidelines—on anonymisation and pseudonymisation—are listed in the EDPB’s Work Programme 2026-2027, adopted February 11, 2026. (Note, however, that both guidelines were listed in the EDPB’s Work Programmes in previous years as well.) During the event, a representative of the Italian Garante also invited stakeholders to come forward with technical examples of what effective pseudonymisation looks like, so there may be additional guidance on that front in the future.
Notably, the definition of “personal data” under the GDPR continues to be debated as part of the European Commission’s digital omnibus proposal—so the discussion will continue (and amplify) in the coming months.