On February 10, 2020, the UK Government’s Committee on Standards in Public Life* (the “Committee”) published its Report on Artificial Intelligence and Public Standards (the “Report”). The Report examines potential opportunities and hurdles in the deployment of AI in the public sector, including how such deployment may implicate the “Seven Principles of Public Life” applicable to holders of public office, also known as the “Nolan Principles” (available here). It also sets out practical recommendations for use of AI in public services, which will be of interest to companies supplying AI technologies to the public sector (including the UK National Health Service (“NHS”)), or offering public services directly to UK citizens on behalf of the UK Government. The Report elaborates on the UK Government’s June 2019 Guide to using AI in the public sector (see our previous blog here).

Public-sector organisations are increasingly exploring ways of using AI to deliver better public services. The Committee found that, in the UK, healthcare and policing currently have the most developed AI programmes, having benefitted from sector-specific codes (such as the Code of Conduct for Data-Driven Health and Care Technology in the healthcare context, available here). The Report contends that implementing clear ethical standards around AI will accelerate adoption of innovative technologies by building user trust in them.

The Report sets out recommendations to the UK Government in its roles as policymaker and regulator, alongside recommendations to providers, both public and private, of public services. Although many of these recommendations will be primarily relevant to public officials, several could impact companies that supply AI products or services to UK public-sector customers, or that provide public services using AI on behalf of UK public-sector organisations.

Recommendations to the UK Government

Of the Report’s recommendations to UK policymakers and regulators, the following are likely to be of particular interest to companies providing AI technologies to UK public-sector customers:

  1. Procurement Rules and Processes. The Government should leverage its market purchasing power to ensure that private companies develop public-sector AI solutions that meet appropriate public standards.
  2. Crown Commercial Service’s Digital Marketplace. The Crown Commercial Service should introduce practical tools to assist public bodies and those delivering public-facing services to easily identify AI products and services that meet their ethical requirements.
  3. Impact Assessment. The Government should consider how an AI impact assessment requirement could be integrated into existing processes to evaluate the potential effects of AI on public standards; such assessments should be mandatory and should be published. If adopted, this recommendation could lead UK public-sector customers to require documentation from suppliers that enables them to conduct such assessments.
  4. Transparency and Disclosure. The Government should establish guidelines for public bodies about the declaration and disclosure of AI systems. To the extent this includes disclosures about the design or operation of the system, this too could lead public-sector customers to impose additional documentation demands on suppliers of AI technologies.

Recommendations to Providers, Both Public and Private, of Public Services

The Report also offers seven recommendations to front-line providers, both public and private, of public services:

  1. Evaluating Risks to Public Standards. The potential impact of a proposed AI system on public standards should be assessed at the project design stage, and the system should be designed to mitigate any identified risks to those standards. This review should be repeated upon any substantial change to the design of the AI system.
  2. Diversity. Issues of bias and discrimination should be tackled by ensuring that account is taken of a diverse range of behaviours, backgrounds and points of view.
  3. Upholding Responsibility. Responsibility for AI systems should be clearly allocated and documented, so that operators of AI systems are able to exercise their responsibility in a meaningful way.
  4. Monitoring and Evaluation. Providers of public services should continually monitor and evaluate AI systems to ensure they always operate as intended.
  5. Establishing Oversight. Oversight mechanisms should be established that allow for AI systems to be properly scrutinized.
  6. Appeal and Redress. Citizens should always be informed of their right to appeal against automated and AI-assisted decisions, and of how to exercise that right.
  7. Training and Education. Employees working with AI systems should undergo continuous training and education.

Additional Observations

The Report concludes that the UK’s regulatory and governance framework for AI in the public sector remains a work in progress. Although the Committee commends the work being done by the Office for AI, the Alan Turing Institute, the Centre for Data Ethics and Innovation (CDEI), and the Information Commissioner’s Office, it notes that there is an urgent need for guidance and regulation on the issues of transparency and data bias in particular. The Committee considers that the UK does not require a specific AI regulator, but that existing relevant regulators should be assisted by the CDEI, acting as a “regulatory assurance” body. The Committee emphasises that public-sector bodies must comply with existing UK law on data usage and should implement clear, risk-based governance frameworks for their use of AI. We can expect that public bodies, such as the NHS, will seek to flow down these requirements to companies providing AI technologies.

The use of AI in the public sector is an issue of increasing debate and focus, particularly in Europe. The team at Covington will continue to monitor developments in this area.

*The Committee on Standards in Public Life is an advisory non-departmental public body that advises the UK Prime Minister on ethical standards across the whole of public life in England. It monitors and reports on issues relating to the standards of conduct of all public-office holders, and its recommendations inform policy decisions of the UK Government.

Lisa Peets

Lisa Peets is co-chair of the firm’s Technology and Communications Regulation Practice Group and a member of the firm’s global Management Committee. Lisa divides her time between London and Brussels, and her practice embraces regulatory compliance and investigations alongside legislative advocacy. In this context, she has worked closely with many of the world’s best-known technology companies.

Lisa counsels clients on a range of EU and UK legal frameworks affecting technology providers, including data protection, content moderation, platform regulation, copyright, e-commerce and consumer protection, and the rapidly expanding universe of additional rules applicable to technology, data and online services. Lisa also routinely advises clients in and outside of the technology sector on trade related matters, including EU trade controls rules.

According to Chambers UK (2024 edition), “Lisa provides an excellent service and familiarity with client needs.”

Marty Hansen

Martin Hansen has over two decades of experience representing some of the world’s leading innovative companies in the internet, IT, e-commerce, and life sciences sectors on a broad range of regulatory, intellectual property, and competition issues. Martin has extensive experience in advising clients on matters arising under EU and U.S. law, UK law, the World Trade Organization agreements, and other trade agreements.

Sam Jungyun Choi

Recognized by Law.com International as a Rising Star (2023), Sam Jungyun Choi is an associate in the technology regulatory group in Brussels. She advises leading multinationals on European and UK data protection law and new regulations and policy relating to innovative technologies, such as AI, digital health, and autonomous vehicles.

Sam is an expert on the EU General Data Protection Regulation (GDPR) and the UK Data Protection Act, having advised on these laws since they started to apply. In recent years, her work has evolved to include advising companies on new data and digital laws in the EU, including the AI Act, Data Act and the Digital Services Act.

Sam’s practice includes advising on regulatory, compliance and policy issues that affect leading companies in the technology, life sciences and gaming sectors on laws relating to privacy and data protection, digital services and AI. She advises clients on the design of new products and services, preparing privacy documentation, and developing data and AI governance programs. She also advises clients on matters relating to children’s privacy and policy initiatives relating to online safety.