On 21 June 2023, at the close of a roundtable meeting of the G7 Data Protection and Privacy Authorities, regulators from the United States, France, Germany, Italy, the United Kingdom, Canada and Japan published a joint “Statement on Generative AI” (“Statement”) (available here). In the Statement, the regulators identify a range of data protection concerns they believe are raised by generative AI tools, including the legal authority for processing personal information, transparency, explainability, and security. The regulators also call on companies to “embed privacy in the design conception, operation, and management” of generative AI tools.
In advance of the G7 meeting, on 15 June 2023, the UK Information Commissioner’s Office (“ICO”) separately announced that it will be “checking” whether businesses have addressed privacy risks before deploying generative AI, and “taking action where there is risk of harm to people through poor use of their data”.
G7 Data Protection and Privacy Authorities
The Statement follows meetings of G7 countries in April and May 2023, where global leaders pledged to advance “international discussions on inclusive artificial intelligence (AI) governance and interoperability” (see our blog here for further details).
The Statement identifies a number of areas in which data protection regulators consider that generative AI tools may raise risks, including (among other topics):
- Legal authority for the processing of personal information, particularly that of children, including in relation to the datasets used to train, validate and test generative AI models;
- Security safeguards to protect against threats and attacks, such as those that seek to extract or reproduce personal information originally in the datasets used to train an AI model;
- Transparency measures to promote openness and explainability in the operation of generative AI tools, especially in cases where such tools are used to make or assist in decision-making about individuals;
- Accountability measures to ensure appropriate levels of responsibility among actors in the AI supply chain; and
- Limiting the collection of personal data to only that which is necessary to fulfil the specified task.
The regulators also adopted an action plan setting out how they will collaborate over the 12 months until next year’s G7 meeting in Italy. During that period, they will hold further discussions on how to address the perceived privacy challenges of generative AI in working groups on emerging technologies and enforcement cooperation. As part of the action plan, the regulators also pledged to increase dialogue and cross-border enforcement cooperation amongst G7 authorities and the broader data protection community.
UK Information Commissioner’s Office
Following its publication of updated Guidance on AI and data protection in April 2023 (see our blog here for an overview), the ICO set out a list of eight questions that, according to the ICO, businesses developing or using generative AI that processes personal data “need to ask” themselves. In its blog post, the ICO emphasizes that existing data protection law applies to processing personal data that comes from publicly accessible sources, and commits to acting where businesses are “not following the law, and considering the impact on individuals”.
The ICO’s questions cover similar topics as the G7 Statement, including (among others):
- Ensuring transparency – the ICO notes that businesses “must make information about the processing publicly accessible unless an exemption applies. If it does not take disproportionate effort, you must communicate this information directly to the individuals the data relates to.”
- Mitigating security risks – in addition to mitigating the risk of personal data breaches, the ICO states that businesses “should consider and mitigate risks of model inversion and membership inference, data poisoning and other forms of adversarial attacks.”
- Limiting unnecessary processing – the ICO advises that businesses “must collect only the data that is adequate to fulfil your stated purpose. The data should be relevant and limited to what is necessary.”
- Preparing a Data Protection Impact Assessment (DPIA) – the ICO notes that businesses must assess and mitigate any data protection risks posed by generative AI tools via the DPIA process before starting to process personal data.
The ICO’s guidance forms part of the UK’s wider approach to AI regulation, which requires existing regulators to take responsibility for promoting and overseeing responsible AI within their sectors (for further information on the UK Government’s approach to AI regulation, see our blog post here). In recent months, UK regulators across a range of sectors have issued statements on generative AI. For example, in June 2023, Ofcom, the UK’s communications regulator, published a guidance note on “What generative AI means for the communications sector”, and, in May 2023, the Competition and Markets Authority (CMA) launched an inquiry into foundation models, including generative AI (see our blog post here for further details).
Covington regularly advises the world’s top technology companies on their most challenging regulatory, compliance, and public policy issues in the EU, UK and other major markets. We are monitoring developments in the EU and UK very closely and will be updating this site regularly – please watch this space for further updates.