On October 12, 2023, the Italian Data Protection Authority (“Garante”) published guidance on the use of AI in healthcare services (“Guidance”).  The document builds on principles enshrined in the GDPR, as well as national and EU case law.  Although the Guidance focuses on Italian national healthcare services, it offers considerations relevant to the use of AI in the healthcare space more broadly.

We provide below an overview of key takeaways.

Lawfulness of processing

The “substantial public interest” derogation for the processing of health data (Article 9(2)(g) of the GDPR) must be grounded in EU law or in specific provisions of national law.  Moreover, when relying on that ground, profiling and automated decision-making may only take place if expressly provided for by law.

Accountability, definition of roles and privacy by design and by default

The Garante stresses the importance of the principles of privacy by design and by default, in connection with accountability.  Controllers should carefully consider the design of their systems and appropriate data protection safeguards throughout the entire AI life cycle.  Additionally, the role of each stakeholder involved should be determined appropriately.

Data protection impact assessment (“DPIA”)

The Garante unequivocally states that the processing of health data to carry out health services at the national level through the use of AI, resulting in systematic and large-scale processing, qualifies as “high risk” and therefore requires conducting a DPIA.  Among other things, the DPIA should take into account specific risks, such as discrimination, linked to the use of algorithms to identify trends and draw conclusions from certain datasets, and to take automated decisions based on profiling.  The DPIA should also carefully outline the role of human intervention in those decision-making processes.

Key principles for performing public interest tasks through AI tools and algorithms

The Garante recalls the application of three key principles, established by recent national case law, when processing personal data by means of AI tools and algorithms in the public interest, namely:

  • Transparency: data subjects have a right to know about the existence of decision-making based on automated processing, and to be informed about the logic involved;
  • Human intervention: human intervention capable of controlling, confirming, or refuting an automated decision should be guaranteed; and
  • Non-discrimination: controllers should ensure that they use reliable AI systems, and implement appropriate measures to reduce opaqueness and errors, and periodically review the systems’ effectiveness, given the potential discriminatory effects that processing of health data may yield.  

Quality, integrity and confidentiality of data

Ensuring the accuracy and quality of data processed is paramount in this context, not least to ensure adequate and safe therapeutic assistance.  Controllers should therefore evaluate carefully the underlying risks and take appropriate measures to address them.

Moreover, the authority highlights the risks connected with potential biases arising in the development and use of the analyses, and/or from the volume of data used, which may result in a negative impact on, or discriminatory effects for, individuals.  Controllers should mitigate these risks by taking the following measures: (1) clarify the algorithmic logic used by the AI to generate data and services; (2) keep a record of the checks performed to avoid biases and of the measures implemented; and (3) monitor risks on an ongoing basis.

Transparency and fairness

To ensure transparency and fairness in automated decision-making processes, and in the particular context of national healthcare services, the Garante recommends implementing the following measures:

  • ensure clarity, predictability and transparency of the legal basis, including by conducting dedicated information campaigns and ensuring effective methods for data subjects to exercise their rights;
  • consult stakeholders and data subjects in the context of conducting a DPIA, and publish at least an excerpt of the DPIA;
  • inform data subjects in clear, concise and comprehensible terms, not only with regard to the elements prescribed by Articles 13 and 14 of the GDPR, but also about (i) whether the processing is performed in the algorithm’s training phase or in its subsequent application, describing the logic and characteristics of the processing; (ii) whether any obligations and responsibilities are imposed on healthcare professionals using healthcare systems based on AI; and (iii) the advantages, with regard to diagnostics and therapy, resulting from the use of such technology;
  • when used for therapeutic purposes, ensure that data processing based on AI is only executed on the basis of an express request by the healthcare professional, and not automatically; and
  • regulate the healthcare practitioner’s professional responsibility.

Human supervision

The Garante highlights the potential risks for individuals’ rights and freedoms posed by exclusively automated decision-making, and endorses effective human intervention through highly skilled supervision.  The authority recommends ensuring a central role for human supervision, in particular by healthcare professionals, including in the algorithm’s training phase.

Principles relating to human dignity and personal identity

The Guidance concludes with some general considerations on the role of ethics in the future development of AI systems in the health space, in order to safeguard human dignity and personal identity, especially with regard to vulnerable subjects.  The Garante recommends carefully selecting and engaging reliable suppliers of AI services, by preliminarily verifying documentation such as an AI impact assessment (for more information on AI impact assessments, see our previous blog post here).

***

Covington’s Data Privacy and Cybersecurity Team regularly advises clients on the laws surrounding AI and continues to monitor developments in the field of AI.  If you have any questions about AI in the healthcare space, our team and Covington’s Life Sciences Team would be happy to assist.

Kristof Van Quathem

Kristof Van Quathem advises clients on information technology matters and policy, with a focus on data protection, cybercrime and various EU data-related initiatives, such as the Data Act, the AI Act and EHDS.

Kristof has been specializing in this area for over twenty years and developed particular experience in the life science and information technology sectors. He counsels clients on government affairs strategies concerning EU lawmaking and their compliance with applicable regulatory frameworks, and has represented clients in non-contentious and contentious matters before data protection authorities, national courts and the Court of the Justice of the EU.

Kristof is admitted to practice in Belgium.

Laura Somaini

Laura Somaini is an associate in the Data Privacy and Cybersecurity Practice Group.

Laura advises clients on EU data protection, e-privacy and technology law, including on Italian requirements. She regularly assists clients in relation to GDPR compliance, international data transfers, direct marketing rules as well as data protection contracts and policies.

Max Jerman

Max Jerman is an associate in the Life Sciences Practice group. Max advises clients across a wide range of regulatory and compliance issues in the pharmaceutical, food, and cosmetics sectors, with a focus on EU and Italian regulatory advice. He is a native Italian and Slovenian speaker.