Last week, the U.S. Food & Drug Administration (“FDA” or the “Agency”) issued a second discussion paper on the use of artificial intelligence (“AI”) and machine learning (“ML”) with respect to drug and biological products, this time focusing on the use of AI/ML in the drug and biologic development process: “Using Artificial Intelligence & Machine Learning in the Development of Drug & Biological Products” (“Second Discussion Paper”).[1] The Second Discussion Paper was issued by the Center for Drug Evaluation and Research (“CDER”), the Center for Biologics Evaluation and Research (“CBER”), and the Center for Devices and Radiological Health (“CDRH”). Its scope covers the use of AI/ML in drug and biologic development, as well as in devices intended to be used in combination with drugs or biologics (including, but not limited to, combination products, companion devices, and complementary devices).
In the Second Discussion Paper and associated press release, FDA recognizes the significance of AI/ML in drug[2] development, citing the more than 100 drug and biological product applications—submitted in 2021 alone—that included AI/ML components, and the areas of drug development where AI/ML efforts are already active, including clinical trial design, use of digital health technologies (“DHTs”), and real-world data (“RWD”) analytics. The Second Discussion Paper does not endorse any specific approaches for the use of AI/ML in drug development, but rather seeks feedback from stakeholders that can help inform the Agency’s future regulatory activities.
This client alert provides a high-level overview of the Second Discussion Paper, as well as areas for potential engagement with the Agency on the use of AI/ML in drug development. Comments on the Second Discussion Paper must be submitted to FDA by August 9, 2023.
Current and Potential Uses of AI/ML in Drug Development
In the Second Discussion Paper, FDA highlights the many ways AI/ML is currently or could potentially be used in the drug development process, including:
- Drug Discovery: FDA notes that early drug discovery is one of the areas in which sponsors have significant interest in utilizing AI/ML. In particular, FDA discusses the ways in which AI/ML has been or can be used in the drug identification, selection, and prioritization process, as well as in the compound screening and design process.
- Nonclinical Research: FDA discusses the ways in which AI/ML could be leveraged to support nonclinical research. FDA notes, for example, that a recurrent neural network, a type of ML algorithm, may be used to complement traditional pharmacokinetic/pharmacodynamic models in areas of highly complex data analysis.
- Clinical Research: FDA observes that one of the “most significant applications of AI/ML” is in efforts to streamline and advance clinical research. For instance, FDA discusses AI/ML’s ability to analyze vast amounts of data and the potential to inform the design and efficiency of non-traditional trials, such as decentralized clinical trials. FDA specifically notes AI/ML’s use in a number of areas related to the conduct of clinical research, including recruitment, dose/dosing regimen optimization, adherence, retention, and site selection.
- Clinical Trial Data Collection, Management, and Analysis, and Clinical Endpoint Assessment: FDA discusses the ways in which AI/ML could be used to collect, manage, and analyze clinical trial data, including the potential role of DHTs to enable the use of AI/ML in clinical trials, the use of AI/ML to enhance data integration and perform data quality assessments, and the use of AI/ML to analyze complex RWD or to build digital twins of patients to analyze how a patient may have progressed on a placebo versus an investigational treatment. FDA also notes the potential use of AI/ML to detect a possible safety signal, or to assess outcomes captured from diverse sources (e.g., DHTs, social media) during a clinical trial.
- Postmarketing Safety Surveillance: FDA notes the ways in which post-approval pharmacovigilance can be supported by AI/ML, for instance in case processing (e.g., detecting information from source documents to help identify adverse events for individual case safety report (“ICSR”) submission), case evaluation (e.g., assessing the possibility of a causal relationship between the drug and the adverse event), and case submission (e.g., automating reporting rules for submission of ICSRs).
- Advanced Pharmaceutical Manufacturing: As noted above, CDER previously issued a discussion paper in March 2023 focused on AI/ML in drug manufacturing. Now, in the Second Discussion Paper, FDA elaborates on the ways in which advanced analytics leveraging AI/ML have already been deployed or have the potential to support pharmaceutical manufacturing efforts, including enhancing process controls, increasing equipment reliability, providing early warnings that a manufacturing process is not in a state of control, detecting recurring problems, and preventing batch losses. FDA specifically notes the potential for AI/ML, in concert with other advanced manufacturing technologies (such as process analytical technology (“PAT”) and continuous manufacturing), to enhance and modernize pharmaceutical manufacturing and alleviate supply chain and shortage issues. FDA identifies four specific areas in which AI/ML could be applied throughout the entire product manufacturing lifecycle: (1) optimization of process design (e.g., use of digital twins in process design optimization); (2) advanced process control implementation; (3) smart monitoring and maintenance; and (4) trending activities (such as trending of deviations, root causes, and corrective and preventive action (“CAPA”) effectiveness).
Considerations for the Use of AI/ML in Drug Development and Opportunities for Engagement with FDA
FDA acknowledges the potential for AI/ML to accelerate the drug development process and make clinical trials safer and more efficient. The Second Discussion Paper also acknowledges the need for the Agency to assess whether the use of AI/ML in these contexts introduces unique risks and harms, including the potential for limited explainability due to the complexity or proprietary nature of an AI/ML system, questions about reliability, and the potential for bias.
Accordingly, FDA notes a focus on “developing standards for trustworthy AI that address specific characteristics in areas such as explainability, reliability, privacy, safety, security, and bias mitigation.” To help address these issues, FDA intends to consider the applicability of certain overarching standards and practices for the general application of AI/ML, and to seek feedback from stakeholders to help identify specific good practices with respect to AI/ML in the context of drug development.
Overarching Standards and Practices for the Use of AI/ML
FDA intends to explore the potential utility and applicability of overarching standards and practices for the use of AI/ML that are not specific to the drug development context. These include AI/ML principles outlined in federal executive orders, the AI Plan developed by the National Institute of Standards and Technology, and AI/ML standards established by standards organizations. The Second Discussion Paper also acknowledges the potential usefulness of the Agency’s frameworks for software as a medical device (“SaMD”), such as an April 2019 discussion paper that proposed a regulatory framework for modifications to AI-based SaMD, a January 2021 AI “Action Plan” for SaMD, and October 2021 guiding principles to inform the development of Good Machine Learning Practice for AI/ML-based medical devices. It seems likely that the Agency will leverage some principles from these sources in developing AI/ML standards for drug development and the development of devices intended to be used with drugs.
Opportunity for Engagement: Request for Feedback
Although the above-referenced, overarching standards may serve as a useful starting point, FDA seeks feedback from stakeholders that highlights additional or unique considerations for AI/ML deployed in the drug development context. Specifically, FDA solicits feedback on three key areas: (1) human-led governance, accountability, and transparency; (2) quality, reliability, and representativeness of data; and (3) model development, performance, monitoring, and validation. The Agency outlines specific questions within each of these areas in the Second Discussion Paper.
- With respect to human-led governance, accountability, and transparency, FDA emphasizes the value of governance and accountability in developing trustworthy AI. The Agency seeks feedback about specific use cases in drug development that have the greatest need for regulatory clarity, what transparency means in the use of AI/ML in drug development, the barriers and facilitators of transparency in these contexts, and good practices for providing risk-based, meaningful human involvement.
- With respect to quality, reliability, and representativeness of data, FDA acknowledges that “ensuring data quality, reliability, and that the data are fit for use (i.e., relevant for the specific intended use and population) can be critical,” and highlights data-related issues such as bias, completeness and accuracy of data, privacy and security, record trails, relevance, replicability, reproducibility, and representativeness. FDA solicits feedback on key practices utilized by stakeholders to help address these issues.
- Finally, with respect to model development, performance, monitoring, and validation, FDA highlights the importance of evaluating AI/ML models over time to assess model risk and credibility. For example, FDA acknowledges that there may be overall advantages to selecting a more traditional and parsimonious (i.e., fewer parameters) model over a complex model where the models perform similarly. Additionally, the Second Discussion Paper states that it may be important to examine corrective actions and real-world performance, conduct postmarket surveillance, verify the software code and calculations, and evaluate the applicability of validation assessments to the context of use. FDA solicits feedback on examples of tools, processes, approaches, and best practices being used by stakeholders to develop and monitor AI/ML models.
Submitting feedback on these questions is an important opportunity to help develop the standards that govern the use of AI/ML in drug development. The comment period closes on August 9, 2023.
Other Opportunities for Engagement
FDA also is coordinating a number of mechanisms for stakeholders to engage with the Agency on AI/ML in drug development, such as a workshop with stakeholders, public meetings, and further Critical Path Innovation Meetings, as well as meetings under the ISTAND Pilot Program, the Emerging Technology Program, and the Real-World Evidence Program. FDA views these efforts and collaborations as providing “a foundation for a future framework or guidance.” Stakeholders should watch closely for these opportunities.
[1] For a summary and analysis of FDA’s first discussion paper, which focused on the use of AI in drug manufacturing, please see our prior blog post, “FDA Seeks Comments on Agency Actions to Advance Use of AI and Digital Health Technologies in Drug Development.” The first discussion paper, “Artificial Intelligence in Drug Manufacturing,” was issued by the Center for Drug Evaluation and Research (CDER), and is available at https://www.fda.gov/media/165743/download.
[2] For purposes of the Second Discussion Paper, FDA states that all references to “drug” or “drugs” include both human drugs and biological products.