On 13 October 2023, members of the G7 released a set of draft guiding principles (“Principles”) for organizations developing advanced AI systems, including generative AI and foundation models.

In parallel, the European Commission launched a stakeholder survey (“Survey”) on the Principles, inviting any interested parties to comment by 20 October 2023.  After the Survey is complete, G7 members intend to compile a voluntary code of conduct that will provide guidance for AI developers.  The Principles and voluntary code of conduct will complement the legally binding rules that EU co-legislators are currently finalizing under the EU AI Act (for further details on the AI Act, see our blog post here).

The Principles build on the existing OECD AI Principles, published in May 2019 (see our blog post here), and respond to recent developments in advanced AI systems.  They would apply to all participants in the AI value chain, including those responsible for the design, development, deployment, and use of AI systems.

The following is a summary of the Principles:

The Principles

The G7 has developed eleven draft guiding principles, which are non-exhaustive and subject to stakeholder consultation:

  • Safety Measures:  Take appropriate measures, including prior to and throughout the deployment and placement on the market of AI systems, to identify and mitigate perceived risks across the AI lifecycle.  Such measures should include testing and mitigation techniques, including traceability in relation to datasets, processes and decisions made during system development;
  • Vulnerabilities:  Identify and mitigate vulnerabilities relating to AI systems, including by facilitating third-party and user discovery and reporting of issues after deployment;
  • Transparency Reports:  Publicly report meaningful information detailing an AI system’s capabilities, limitations and domains of appropriate and inappropriate use;
  • Information Sharing:  Share information on security and safety risks among organizations developing advanced AI systems, including with industry, governments, civil society, and academia;
  • Risk Management Policies:  Develop, implement, and disclose AI governance and risk management policies, grounded in a risk-based approach; this includes disclosing, where appropriate, privacy policies and mitigation measures, including for personal data, user prompts and advanced AI system outputs;
  • Security Controls:  Invest in and implement robust security controls, which may include securing model weights, algorithms, and servers, as well as operational security measures for information security and cyber/physical access controls;
  • Content Authentication and Provenance:  Develop and deploy reliable content authentication and provenance mechanisms, such as watermarking, to enable users to identify AI-generated content;
  • Research:  Prioritize research to mitigate societal, safety and security risks, and prioritize investment in effective mitigation measures, including research that advances AI safety and security and addresses key risks;
  • Global Challenges:  Prioritize the development of advanced AI systems to address the world’s greatest challenges, including the climate crisis, global health and education;
  • Technical Standards:  Advance the development of international technical standards and best practices, including for watermarking; and
  • Safeguards:  Implement appropriate data input controls and audits, including by committing to implement appropriate safeguards throughout the AI lifecycle, particularly before and throughout training, on the use of personal data, data protected by intellectual property, and other data that could result in potentially harmful model capabilities.

The Covington team continues to monitor regulatory developments on AI, and we regularly advise the world’s top technology companies on their most challenging regulatory and compliance issues in the EU and other major markets. If you have questions about AI regulation or other tech regulatory matters, we would be happy to assist.

Marianna Drake

Marianna Drake counsels leading multinational companies on some of their most complex regulatory, policy and compliance-related issues, including data privacy and AI regulation. She focuses her practice on compliance with UK, EU and global privacy frameworks, and new policy proposals and regulations relating to AI and data. She also advises clients on matters relating to children’s privacy, online safety and consumer protection and product safety laws.

Her practice includes defending organizations in cross-border, contentious investigations and regulatory enforcement in the UK and EU Member States. Marianna also routinely partners with clients on the design of new products and services, drafting and negotiating privacy terms, developing privacy notices and consent forms, and helping clients design governance programs for the development and deployment of AI technologies.

Marianna’s pro bono work includes providing data protection advice to UK-based human rights charities, and supporting a non-profit organization in conducting legal research for strategic litigation.

Marty Hansen

Martin Hansen has represented some of the world’s leading information technology, telecommunications, and pharmaceutical companies on a broad range of cutting-edge international trade, intellectual property, and competition issues. Martin has extensive experience in advising clients on matters arising under the World Trade Organization agreements, treaties administered by the World Intellectual Property Organization, bilateral and regional free trade agreements, and other trade agreements.

Drawing on ten years of experience in Covington’s London and DC offices, his practice focuses on helping innovative companies solve challenges on intellectual property and trade matters before U.S. courts, the U.S. government, and foreign governments and tribunals. Martin also represents software companies and a leading IT trade association on electronic commerce, Internet security, and online liability issues.

Will Capstick

Will Capstick is a Trainee who attended BPP Law School.

Lisa Peets

Lisa Peets leads the Technology Regulatory and Policy practice in the London office and is a member of the firm’s Management Committee. Lisa divides her time between London and Brussels, and her practice embraces regulatory counsel and legislative advocacy. In this context, she has worked closely with leading multinationals in a number of sectors, including many of the world’s best-known technology companies.

Lisa counsels clients on a range of EU law issues, including data protection and related regimes, copyright, e-commerce and consumer protection, and the rapidly expanding universe of EU rules applicable to existing and emerging technologies. Lisa also routinely advises clients in and outside of the technology sector on trade related matters, including EU trade controls rules.

According to the latest edition of Chambers UK (2022), “Lisa is able to make an incredibly quick legal assessment whereby she perfectly distils the essential matters from the less relevant elements.” “Lisa has subject matter expertise but is also able to think like a generalist and prioritise. She brings a strategic lens to matters.”