On June 3, 2019, the UK Information Commissioner’s Office (“ICO”) released an Interim Report on “Project ExplAIn,” a collaboration with The Alan Turing Institute (“Institute”). According to the ICO, the project’s purpose is to develop “practical guidance” for organizations on complying with UK data protection law when using artificial intelligence (“AI”) decision-making systems, and in particular on explaining to individuals the impact that AI decisions may have on them. The Interim Report may be of particular relevance to organizations considering how to meet transparency obligations when deploying AI systems that make automated decisions falling within the scope of Article 22 of the GDPR.

The Interim Report summarizes the results of recent engagements with public and industry stakeholders on how best to explain AI decision-making, which will in turn inform the ICO’s development of guidance on this issue. The research used a ‘citizens’ jury’ method to gauge public perceptions of the issues, alongside roundtables with industry stakeholders, including data scientists, researchers, Chief Data Officers, C-suite executives, Data Protection Officers, lawyers, and consultants.

Following the results of the research, the Interim Report provides three key findings:

  1. the importance of context in providing the right type of explanations for AI;
  2. the need for greater education and awareness of AI systems; and
  3. the challenges to providing explanations (such as cost, commercial sensitivities, and lack of internal accountability within organisations).

In relation to context, the citizens’ juries found that the type and usefulness of AI explanations are highly context-dependent. For instance, most jurors felt it was less important to receive an explanation of an AI system’s decision in the healthcare sector, but that such explanations were more important when AI is used to make decisions about recruitment and criminal justice. Participants also felt that the importance of an explanation of an AI decision is likely to vary depending on the person receiving it; in a healthcare setting, for instance, it may be more important for a healthcare professional to receive an explanation of a decision than for the patient. Some participants also expressed the view that in some situations (such as the healthcare or criminal justice scenarios), explanations of AI decisions may be too complex, or may be delivered at a time when individuals would not understand the rationale.

Industry stakeholders presented similar but more nuanced views, highlighting that using explanations to identify and address underlying system bias was a key consideration. While some industry stakeholders agreed with the jurors that explanations of AI decisions should be context-specific and reflect the way in which human decision-makers provide explanations, others argued that AI decisions should be held to higher standards. Beyond the risk that such explanations may be too complex, industry stakeholders also identified several additional risks with AI explanations that are too detailed, such as the potential disclosure of commercially sensitive material or allowing the system to be gamed. The Interim Report provides a list of contextual factors that the research found may be relevant when considering the importance, purpose, and type of explanations of AI decision-making (see p. 23).

In terms of next steps, the ICO plans to publish a first draft of its guidance over the summer, which will be subject to public consultation. Following the consultation, the ICO plans to publish the final guidance later in the autumn. The Interim Report identified three possible implications for the development of the guidance:

  1. there is no one-size-fits-all approach for explaining AI decisions;
  2. the need for board-level buy-in on explaining AI decisions; and
  3. the value in a standardised approach to internal accountability to help assign responsibility for explainable AI decision-systems.

The Interim Report offers a taster of what’s to come by setting out the currently planned format and content of the guidance, which focuses on three key principles: (i) transparency; (ii) context; and (iii) accountability. The guidance will also cover organisational controls (such as roles, policies, procedures, and documentation), technical controls (such as data collection, model selection, and explanation extraction), and the delivery of explanations. Separately, the ICO will finalise its AI Auditing Framework in 2020, which will also address the data protection risks arising from AI systems.

Mark Young

Mark Young is an experienced tech regulatory lawyer and a vice-chair of Covington’s Data Privacy and Cybersecurity Practice Group. He advises major global companies on their most challenging data privacy compliance matters and investigations. Mark also leads on EMEA cybersecurity matters at the firm. In these contexts, he has worked closely with some of the world’s leading technology and life sciences companies and other multinationals.

Mark has been recognized for several years in Chambers UK as “a trusted adviser – practical, results-oriented and an expert in the field;” “fast, thorough and responsive;” “extremely pragmatic in advice on risk;” “provides thoughtful, strategic guidance and is a pleasure to work with;” and has “great insight into the regulators.” According to the most recent edition (2024), “He’s extremely technologically sophisticated and advises on true issues of first impression, particularly in the field of AI.”

Drawing on over 15 years of experience, Mark specializes in:

  • Advising on potential exposure under GDPR and international data privacy laws in relation to innovative products and services that involve cutting-edge technology, e.g., AI, biometric data, and connected devices.
  • Providing practical guidance on novel uses of personal data, responding to individuals exercising rights, and data transfers, including advising on Binding Corporate Rules (BCRs) and compliance challenges following Brexit and Schrems II.
  • Helping clients respond to investigations by data protection regulators in the UK, EU and globally, and advising on potential follow-on litigation risks.
  • Counseling ad networks (demand and supply side), retailers, and other adtech companies on data privacy compliance relating to programmatic advertising, and providing strategic advice on complaints and claims in a range of jurisdictions.
  • Advising life sciences companies on industry-specific data privacy issues, including:
    • clinical trials and pharmacovigilance;
    • digital health products and services; and
    • engagement with healthcare professionals and marketing programs.
  • International conflict of law issues relating to white collar investigations and data privacy compliance (collecting data from employees and others, international transfers, etc.).
  • Advising various clients on the EU NIS2 Directive and UK NIS regulations and other cybersecurity-related regulations, particularly (i) cloud computing service providers, online marketplaces, social media networks, and other digital infrastructure and service providers, and (ii) medical device and pharma companies, and other manufacturers.
  • Helping a broad range of organizations prepare for and respond to cybersecurity incidents, including personal data breaches, IP and trade secret theft, ransomware, insider threats, supply chain incidents, and state-sponsored attacks. Mark’s incident response expertise includes:
    • supervising technical investigations and providing updates to company boards and leaders;
    • advising on PR and related legal risks following an incident;
    • engaging with law enforcement and government agencies; and
    • advising on notification obligations and other legal risks, and representing clients before regulators around the world.
  • Advising clients on risks and potential liabilities in relation to corporate transactions, especially involving companies that process significant volumes of personal data (e.g., in the adtech, digital identity/anti-fraud, and social network sectors).
  • Providing strategic advice and advocacy on a range of UK and EU technology law reform issues including data privacy, cybersecurity, ecommerce, eID and trust services, and software-related proposals.
  • Representing clients in connection with references to the Court of Justice of the EU.
Marty Hansen

Martin Hansen has over two decades of experience representing some of the world’s leading innovative companies in the internet, IT, e-commerce, and life sciences sectors on a broad range of regulatory, intellectual property, and competition issues, including related to artificial intelligence. Martin has extensive experience in advising clients on matters arising under EU and U.S. law, UK law, the World Trade Organization agreements, and other trade agreements.

Sam Jungyun Choi

Recognized by Law.com International as a Rising Star (2023), Sam Jungyun Choi is an associate in the technology regulatory group in Brussels. She advises leading multinationals on European and UK data protection law and new regulations and policy relating to innovative technologies, such as AI, digital health, and autonomous vehicles.

Sam is an expert on the EU General Data Protection Regulation (GDPR) and the UK Data Protection Act, having advised on these laws since they started to apply. In recent years, her work has evolved to include advising companies on new data and digital laws in the EU, including the AI Act, Data Act and the Digital Services Act.

Sam’s practice includes advising on regulatory, compliance, and policy issues affecting leading companies in the technology, life sciences, and gaming sectors under laws relating to privacy and data protection, digital services, and AI. She advises clients on the design of new products and services, preparing privacy documentation, and developing data and AI governance programs. She also advises clients on matters relating to children’s privacy and on policy initiatives relating to online safety.