On July 25, 2019, the UK’s Information Commissioner’s Office (“ICO”) published a blog post on the trade-offs between different data protection principles when using Artificial Intelligence (“AI”).  The ICO recognizes that AI systems must comply with several data protection principles and requirements, which at times may pull organizations in different directions.  The blog identifies notable trade-offs that may arise, provides practical tips for resolving them, and offers worked examples of how trade-offs can be visualized and mathematically minimized.

The ICO invites organizations with experience of considering these complex issues to provide their views.  This blog post on trade-offs forms part of the ICO’s ongoing Call for Input on developing a new framework for auditing AI.  See also our earlier blog on the ICO’s call for input on bias and discrimination in AI systems here.

The ICO identifies the following trade-offs that may arise in AI projects:

  • Accuracy vs. privacy. Large amounts of data are needed to improve the accuracy of AI systems, but this may impact the privacy rights of the individuals involved.
  • Fairness vs. accuracy. Certain factors need to be removed from AI algorithms to ensure that AI systems are fair and do not discriminate against individuals on the basis of any protected characteristics (as well as known proxies, such as postcode as a proxy for race).  However, this may reduce the accuracy of the AI system (see the illustrative sketch after this list).
  • Fairness vs. privacy. In order to test whether an AI system is discriminatory, it needs to be tested using data labelled by protected characteristics, but this may be restricted under privacy law (i.e., under the rules on processing special category personal data).
  • Explainability vs. accuracy. For complex AI systems, it may be difficult to explain the logic of the system in an easy-to-understand way that is also accurate.  The ICO considers, however, that this trade-off between explainability and accuracy is often a false dichotomy.  See our previous blog post on the ICO’s separate report on explaining AI for more on the topic.
  • Explainability vs. security. Providing detailed explanations about the logic of an AI system may inadvertently disclose information that can be used to infer private information about the individuals whose personal data was used to build the AI system.  The ICO recognizes that this area is under active research, and the full extent of the risks is not yet known.
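
To make the fairness vs. accuracy tension concrete, the sketch below is our own illustration and is not drawn from the ICO’s blog.  It trains a simple classifier on synthetic data with and without a proxy feature for a protected characteristic, then compares accuracy against a demographic parity gap (the difference in positive-prediction rates between groups).  It assumes numpy and scikit-learn are available; all names and data are hypothetical.

```python
# Illustrative sketch (not from the ICO's blog): compare accuracy and a
# simple fairness metric when a proxy feature is kept vs. removed.
# Synthetic data throughout; numpy and scikit-learn assumed available.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000

group = rng.integers(0, 2, n)                  # protected attribute (binary)
proxy = group + rng.normal(0, 0.5, n)          # proxy correlated with group
skill = rng.normal(0, 1, n)                    # legitimate feature
# Outcome depends on the legitimate feature, but historical bias also
# ties it to group membership.
y = (skill + 0.8 * group + rng.normal(0, 1, n) > 0.4).astype(int)

X_full = np.column_stack([skill, proxy])       # proxy included
X_fair = skill.reshape(-1, 1)                  # proxy removed

for name, X in [("with proxy", X_full), ("proxy removed", X_fair)]:
    X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
        X, y, group, test_size=0.3, random_state=0)
    pred = LogisticRegression().fit(X_tr, y_tr).predict(X_te)
    acc = (pred == y_te).mean()
    # Demographic parity gap: difference in positive-prediction rates
    # between the two groups (smaller is fairer on this metric).
    gap = abs(pred[g_te == 1].mean() - pred[g_te == 0].mean())
    print(f"{name:14s}  accuracy={acc:.3f}  parity gap={gap:.3f}")
```

On a typical run of this synthetic setup, removing the proxy narrows the parity gap at some cost in accuracy, which is the shape of the trade-off the ICO describes.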

The ICO recommends that organizations take the following steps to manage trade-offs that may arise:

  1. Identify and assess existing or potential trade-offs;
  2. Consider available technical means to minimize trade-offs;
  3. Have clear criteria and lines of accountability for making trade-off decisions, including a “robust, risk-based and independent approval process”;
  4. Explain trade-offs to data subjects or humans reviewing the AI outputs;
  5. Continue to regularly review trade-offs.

The ICO makes a number of additional recommendations.  For example:

  • Organizations should document decisions to an “auditable standard”, including, where required, by performing a Data Protection Impact Assessment. Such documentation should: (i) consider the risks to individuals’ personal data; (ii) use a methodology to identify and assess trade-offs; (iii) provide a rationale for final decisions; and (iv) explain how the decision aligns with the organization’s risk appetite.
  • When outsourcing AI solutions, assessing trade-offs should form part of organizations’ due diligence of third parties. Organizations should ensure they can request that solutions be modified to strike an appropriate balance across the trade-offs identified above.

In the final section of the blog, the ICO offers worked examples demonstrating mathematical approaches that can help organizations visualize and balance the trade-offs.  Although elements of trade-offs can be precisely quantified in some cases, the ICO recognizes that not all aspects of privacy and fairness can be fully quantified.  The ICO therefore recommends that such methods should “always be supplemented with a more holistic approach”.
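
The ICO’s worked examples are not reproduced here, but the underlying idea resembles a Pareto analysis: among candidate models scored on competing objectives, discard any candidate that another candidate matches or beats on every dimension.  The sketch below is a minimal illustration under that assumption; the model names and scores are hypothetical, not the ICO’s.

```python
# Illustrative sketch (our own, not the ICO's worked example): given
# candidate models scored on accuracy and a fairness metric (higher is
# better for both), keep only the Pareto-optimal candidates.
candidates = {
    "model_a": (0.91, 0.62),   # hypothetical (accuracy, fairness) scores
    "model_b": (0.89, 0.74),
    "model_c": (0.85, 0.90),
    "model_d": (0.84, 0.71),   # dominated by model_b
}

def pareto_front(scores):
    """Return the names of candidates that no other candidate dominates."""
    front = []
    for name, (acc, fair) in scores.items():
        dominated = any(
            a >= acc and f >= fair and (a > acc or f > fair)
            for other, (a, f) in scores.items() if other != name)
        if not dominated:
            front.append(name)
    return front

print(pareto_front(candidates))  # ['model_a', 'model_b', 'model_c']
```

Consistent with the ICO’s caveat, the mathematics only narrows the field: choosing among the remaining non-dominated options is a judgment call that still needs the “more holistic approach” the ICO recommends.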

The ICO has published a separate blog post on the use of fully automated decision-making AI systems and the right to human intervention under the GDPR.  The ICO provides practical advice for organizations on how to ensure compliance with the GDPR, such as: (i) considering the requirements necessary to support a meaningful human review; (ii) providing training for human reviewers; and (iii) supporting and incentivizing staff to escalate concerns raised by data subjects.  For more information, read the ICO’s blog here.

The ICO intends to publish a formal consultation paper on the framework for auditing AI in January 2020, followed by the final AI Auditing Framework in the spring.  In the meantime, the ICO welcomes feedback on its current thinking, and has provided a dedicated email address to obtain views (available at the bottom of the blog).  We will continue to monitor the ICO’s developments in this area and will keep you apprised on this blog.

Mark Young

Mark Young is an experienced tech regulatory lawyer and a vice-chair of Covington’s Data Privacy and Cybersecurity Practice Group. He advises major global companies on their most challenging data privacy compliance matters and investigations. Mark also leads on EMEA cybersecurity matters at the firm. In these contexts, he has worked closely with some of the world’s leading technology and life sciences companies and other multinationals.

Mark has been recognized for several years in Chambers UK as “a trusted adviser – practical, results-oriented and an expert in the field;” “fast, thorough and responsive;” “extremely pragmatic in advice on risk;” “provides thoughtful, strategic guidance and is a pleasure to work with;” and has “great insight into the regulators.” According to the most recent edition (2024), “He’s extremely technologically sophisticated and advises on true issues of first impression, particularly in the field of AI.”

Drawing on over 15 years of experience, Mark specializes in:

  • Advising on potential exposure under GDPR and international data privacy laws in relation to innovative products and services that involve cutting-edge technology, e.g., AI, biometric data, and connected devices.
  • Providing practical guidance on novel uses of personal data, responding to individuals exercising rights, and data transfers, including advising on Binding Corporate Rules (BCRs) and compliance challenges following Brexit and Schrems II.
  • Helping clients respond to investigations by data protection regulators in the UK, EU and globally, and advising on potential follow-on litigation risks.
  • Counseling ad networks (demand and supply side), retailers, and other adtech companies on data privacy compliance relating to programmatic advertising, and providing strategic advice on complaints and claims in a range of jurisdictions.
  • Advising life sciences companies on industry-specific data privacy issues, including:
    • clinical trials and pharmacovigilance;
    • digital health products and services; and
    • engagement with healthcare professionals and marketing programs.
  • International conflict of law issues relating to white collar investigations and data privacy compliance (collecting data from employees and others, international transfers, etc.).
  • Advising various clients on the EU NIS2 Directive and UK NIS regulations and other cybersecurity-related regulations, particularly (i) cloud computing service providers, online marketplaces, social media networks, and other digital infrastructure and service providers, and (ii) medical device and pharma companies, and other manufacturers.
  • Helping a broad range of organizations prepare for and respond to cybersecurity incidents, including personal data breaches, IP and trade secret theft, ransomware, insider threats, supply chain incidents, and state-sponsored attacks. Mark’s incident response expertise includes:
    • supervising technical investigations and providing updates to company boards and leaders;
    • advising on PR and related legal risks following an incident;
    • engaging with law enforcement and government agencies; and
    • advising on notification obligations and other legal risks, and representing clients before regulators around the world.
  • Advising clients on risks and potential liabilities in relation to corporate transactions, especially involving companies that process significant volumes of personal data (e.g., in the adtech, digital identity/anti-fraud, and social network sectors).
  • Providing strategic advice and advocacy on a range of UK and EU technology law reform issues including data privacy, cybersecurity, ecommerce, eID and trust services, and software-related proposals.
  • Representing clients in connection with references to the Court of Justice of the EU.

Sam Jungyun Choi

Recognized by Law.com International as a Rising Star (2023), Sam Jungyun Choi is an associate in the technology regulatory group in Brussels. She advises leading multinationals on European and UK data protection law and new regulations and policy relating to innovative technologies, such as AI, digital health, and autonomous vehicles.

Sam is an expert on the EU General Data Protection Regulation (GDPR) and the UK Data Protection Act, having advised on these laws since they started to apply. In recent years, her work has evolved to include advising companies on new data and digital laws in the EU, including the AI Act, Data Act and the Digital Services Act.

Sam’s practice includes advising on regulatory, compliance and policy issues that affect leading companies in the technology, life sciences and gaming sectors under laws relating to privacy and data protection, digital services and AI. She advises clients on the design of new products and services, preparing privacy documentation, and developing data and AI governance programs. She also advises clients on matters relating to children’s privacy and policy initiatives relating to online safety.