On April 30, 2024, the UK Medicines and Healthcare products Regulatory Agency (“MHRA”) outlined its strategic approach (“Approach”) to artificial intelligence (“AI”). The Approach responds to the UK Government’s white paper, “A pro-innovation approach to AI regulation”, and the subsequent Secretary of State letter of 1 February 2024. It is the culmination of 12 months’ work by the MHRA to ensure that the risks of AI are appropriately balanced against AI’s potentially transformative impact on healthcare.
AI in Healthcare
AI has the potential to revolutionize the healthcare sector and improve health outcomes at every stage of healthcare provision – from preventative care through to diagnosis and treatment. AI can help in research and development by strengthening outcomes of clinical trials, as well as being used to improve the clinical care of patients by personalizing care, improving diagnosis and treatment, enhancing the delivery of care and health system efficiency, and supplementing healthcare professionals’ knowledge, skills and competencies.
MHRA Strategic Approach
The Approach addresses the MHRA’s views of the risks and opportunities of AI from three different perspectives: (i) as a regulator of AI products (noting, in the MHRA’s words, that “where AI is used for a medical purpose, it is very likely to come within the definition of a general medical device”); (ii) as a public service organization delivering time-critical decisions; and (iii) as an organization that makes evidence-based decisions that impact public and patient safety, where that evidence is often supplied by third parties.
The MHRA has set out five key strategic principles as part of its Approach. In its role as a regulator under (i) above, these principles are largely focused on medical devices with limited discussion of the medicines framework. The MHRA does discuss use of AI in the medicines lifecycle under (ii) but primarily in the context of improving the quality of applications for medicines licences (for example through use of AI for completeness and consistency checks), for protecting consumers from fraudulent medical products and for enhancing vigilance.
For the purposes of this blog, we have focused on part (i) of the Approach:
1. Safety, Security and Robustness
AI systems should be robust, secure and function in a safe way throughout the AI lifecycle. Risks should be continually identified, assessed and managed. The MHRA is expected to take a proportionate approach. Related guidance will follow, including new guidance on cyber security expected to be published in spring 2025.
2. Appropriate Transparency and Explainability
AI systems should be appropriately transparent and explainable. Manufacturers should account for the intended user when designing AI systems and must provide a clear statement of the purpose of devices incorporating AI. For AI as a Medical Device (“AIaMD”) a key risk is the human/device interface and the MHRA is expected to provide further guidance specifically for AIaMD products in spring 2025.
3. Fairness
AI systems should not unfairly discriminate against individuals, create unfair market outcomes, nor undermine the legal rights of individuals or organizations. The MHRA supports an internationally aligned position, and manufacturers should refer to ISO/IEC TR 24027:2021 (“Information technology, Artificial intelligence (AI), Bias in AI systems and AI aided decision making”), IMDRF guidance document N65 and the STANDING Together recommendations when developing AI products.
4. Accountability and Governance
Clear lines of accountability should be established across the AI life cycle and governance measures should be implemented to ensure effective oversight of the supply and use of AI. For manufacturers of AIaMD, the MHRA has recently published, in collaboration with the US Food and Drug Administration (“FDA”) and Health Canada, guidance on the principles of Predetermined Change Control Plans (“PCCP”) to enable full traceability and accountability of manufacturers for how AI models meet intended use as well as the impact of changes. The MHRA intends to introduce PCCPs in future core regulations, initially on a non-mandatory basis.
5. Contestability and Redress
Where appropriate, users, impacted third parties and actors in the AI life cycle should be able to contest an AI decision or outcome that is harmful or creates material risk of harm. The MHRA aims to strengthen its Yellow Card scheme (for the reporting of adverse events) as well as the obligation on manufacturers of medical devices, including AIaMD, to report incidents to the MHRA.
Going forward, we expect to see more national and international guidance published on the use of AI in medical products, and the MHRA is currently implementing its own regulatory reform programme for AI-driven medical devices, including risk-proportionate regulation of AIaMD. Interested parties should therefore be aware of the ever-changing landscape of AI regulation, and in particular of increasing transparency and accountability requirements.
The MHRA’s Approach also follows on from AI-related publications by other European medicines regulators and industry bodies (see our recent blog post on EFPIA’s position here, as well as our commentary on the EMA’s draft AI reflection paper, here, which was published last year).
If you have any queries concerning the material discussed in this blog or AI more broadly, please reach out to a member of our team.