On April 30, 2024, the UK Medicines and Healthcare products Regulatory Agency (“MHRA”) outlined its strategic approach (“Approach”) to artificial intelligence (“AI”).  The Approach responds to the UK Government’s white paper, “A pro-innovation approach to AI regulation”, and the subsequent Secretary of State letter of 1 February 2024, and is the culmination of 12 months’ work by the MHRA to ensure that the risks of AI are appropriately balanced against its potentially transformative impact in healthcare.

AI in Healthcare

AI has the potential to revolutionize the healthcare sector and improve health outcomes at every stage of healthcare provision – from preventative care through to diagnosis and treatment.  AI can support research and development by strengthening the outcomes of clinical trials, and can improve the clinical care of patients by personalizing care, improving diagnosis and treatment, enhancing the delivery of care and health system efficiency, and supplementing healthcare professionals’ knowledge, skills and competencies.

MHRA Strategic Approach

The Approach sets out the MHRA’s views on the risks and opportunities of AI from three different perspectives: (i) as a regulator of AI products (noting, in the MHRA’s words, that “where AI is used for a medical purpose, it is very likely to come within the definition of a general medical device”); (ii) as a public service organization delivering time-critical decisions; and (iii) as an organization that makes evidence-based decisions that impact public and patient safety, where that evidence is often supplied by third parties.

The MHRA has set out five key strategic principles as part of its Approach.  In its role as a regulator under (i) above, these principles focus largely on medical devices, with limited discussion of the medicines framework.  The MHRA does discuss the use of AI in the medicines lifecycle under (ii), but primarily in the context of improving the quality of applications for medicines licences (for example, through the use of AI for completeness and consistency checks), protecting consumers from fraudulent medical products, and enhancing vigilance.

For the purposes of this blog, we have focused on part (i) of the Approach:

1. Safety, Security and Robustness

AI systems should be robust and secure, and should function safely throughout the AI lifecycle.  Risks should be continually identified, assessed and managed.  The MHRA intends to take a proportionate approach.  Related guidance will follow, including new guidance on cyber security expected to be published in spring 2025.

2. Appropriate Transparency and Explainability

AI systems should be appropriately transparent and explainable.  Manufacturers should account for the intended user when designing AI systems and must provide a clear statement of the purpose of devices incorporating AI.  For AI as a Medical Device (“AIaMD”), a key risk is the human/device interface, and the MHRA expects to publish further guidance specifically for AIaMD products in spring 2025.

3. Fairness

AI systems should not unfairly discriminate against individuals, create unfair market outcomes, or undermine the legal rights of individuals or organizations.  The MHRA supports an internationally aligned position: when developing AI products, manufacturers should refer to ISO/IEC TR 24027:2021 (Information technology, Artificial intelligence (AI), Bias in AI systems and AI aided decision making), IMDRF guidance document N65, and the STANDING Together recommendations.

4. Accountability and Governance

Clear lines of accountability should be established across the AI lifecycle, and governance measures should be implemented to ensure effective oversight of the supply and use of AI.  For manufacturers of AIaMD, the MHRA has recently published, in collaboration with the US Food and Drug Administration (“FDA”) and Health Canada, guidance on the principles of Predetermined Change Control Plans (“PCCPs”), which are intended to ensure full traceability and accountability of manufacturers for how AI models meet their intended use, as well as for the impact of changes to those models.  The MHRA intends to introduce PCCPs into future core regulations, initially on a non-mandatory basis.

5. Contestability and Redress

Where appropriate, users, impacted third parties and actors in the AI lifecycle should be able to contest an AI decision or outcome that is harmful or creates a material risk of harm.  The MHRA aims to strengthen its Yellow Card scheme (for the reporting of adverse events), as well as the obligation on manufacturers of medical devices, including AIaMD, to report incidents to the MHRA.

Going forward, we expect more national and international guidance to be published on the use of AI in medical products.  The MHRA is also implementing its own regulatory reform programme for AI-driven medical devices, which will include risk-proportionate regulation of AIaMD.  Interested parties should therefore monitor this fast-changing regulatory landscape, and in particular the increasing transparency and accountability requirements.

The MHRA’s Approach also follows on from AI-related publications by other European medicines regulators and industry bodies (see our recent blog post on EFPIA’s position here, as well as our commentary, here, on the EMA’s draft AI reflection paper published last year).

If you have any queries concerning the material discussed in this blog or AI more broadly, please reach out to a member of our team.

Sarah Cowlishaw

Advising clients on a broad range of life sciences matters, Sarah Cowlishaw supports innovative pharmaceutical, biotech, medical device, diagnostic and technology companies on regulatory, compliance, transactional, and legislative matters.

Sarah is a partner in London and Dublin practicing in the areas of EU, UK and Irish life sciences law. She has particular expertise in medical devices and diagnostics, and in advising on legal issues presented by digital health technologies, helping companies navigate regulatory frameworks while balancing the challenges presented by the pace of technological change outstripping legislative developments.

Sarah is a co-chair of Covington’s multidisciplinary Digital Health Initiative, which brings together the firm’s considerable resources across the broad array of legal, regulatory, commercial, and policy issues relating to the development and exploitation of digital health products and services.

Sarah regularly advises on:

  • obligations under the EU Medical Devices Regulation and In Vitro Diagnostics Medical Devices Regulation, including associated transition issues, and UK-specific considerations caused by Brexit;
  • medical device CE and UKCA marking, quality systems, device vigilance and rules governing clinical investigations and performance evaluations of medical devices and in vitro diagnostics;
  • borderline classification determinations for software medical devices;
  • legal issues presented by digital health technologies including artificial intelligence;
  • general regulatory matters for the pharma and device industry, including borderline determinations, adverse event and other reporting obligations, manufacturing controls, and labeling and promotion;
  • the full range of agreements that span the product life-cycle in the life sciences sector, including collaborations and other strategic agreements, clinical trial agreements, and manufacturing and supply agreements; and
  • regulatory and commercial due diligence for life sciences transactions.

Sarah has been recognized as one of the UK’s Rising Stars by Law.com (2021), which lists 25 up-and-coming female lawyers in the UK. She was named among the Hot 100 by The Lawyer (2020) and was included in the 50 Movers & Shakers in BioBusiness 2019 for advancing legal thinking for digital health.

Sarah is also Graduate Recruitment Partner for Covington’s London office.

Dan Cooper

Daniel Cooper is co-chair of Covington’s Data Privacy and Cyber Security Practice, and advises clients on information technology regulatory and policy issues, particularly data protection, consumer protection, AI, and data security matters. He has over 20 years of experience in the field, representing clients in regulatory proceedings before privacy authorities in Europe and counseling them on their global compliance and government affairs strategies. Dan regularly lectures on the topic, and was instrumental in drafting the privacy standards applied in professional sport.

According to Chambers UK, his “level of expertise is second to none, but it’s also equally paired with a keen understanding of our business and direction.” It was noted that “he is very good at calibrating and helping to gauge risk.”

Dan is qualified to practice law in the United States, the United Kingdom, Ireland and Belgium. He has also been appointed to the advisory and expert boards of privacy NGOs and agencies, such as the IAPP’s European Advisory Board, Privacy International and the EU cybersecurity agency, ENISA.

Kristof Van Quathem

Kristof Van Quathem advises clients on information technology matters and policy, with a focus on data protection, cybercrime and various EU data-related initiatives, such as the Data Act, the AI Act and EHDS.

Kristof has been specializing in this area for over twenty years and has developed particular experience in the life sciences and information technology sectors. He counsels clients on government affairs strategies concerning EU lawmaking and on their compliance with applicable regulatory frameworks, and has represented clients in non-contentious and contentious matters before data protection authorities, national courts and the Court of Justice of the EU.

Kristof is admitted to practice in Belgium.

Tamzin Bond

Tamzin Bond is a trainee solicitor who attended BPP School of Law. Prior to joining the firm, Tamzin completed a Ph.D. in Chemistry at Imperial College London.