02/10/2025 | News release | Distributed by Public on 02/10/2025 07:18
The pharmaceutical industry is on the brink of a transformative shift, with artificial intelligence (AI) increasingly being leveraged across the drug product lifecycle. Recognizing this, the U.S. Food and Drug Administration (FDA) released a draft guidance for industry and other interested parties entitled, "Considerations for the Use of Artificial Intelligence to Support Regulatory Decision-Making for Drug and Biological Products" (1), aimed at helping sponsors and stakeholders navigate the use of AI for submissions to regulatory bodies. This comprehensive document lays out a structured framework to ensure AI-driven tools are credible, reliable, and effective when used to support regulatory decisions concerning drug safety, efficacy, and quality.
If you're a scientist or industry professional exploring AI applications in pharmaceutical development, this draft guidance offers relevant insights. Here, the Regulatory Strategies Center of Excellence (RS COE) at Simulations Plus distills its key recommendations and discusses some of the implications.
At the heart of the FDA's guidance is a risk-based credibility assessment framework designed to establish and evaluate the credibility of AI models. The risk-based credibility assessment framework is not new and has been applied to many other types of models, especially highly complex models such as quantitative systems pharmacology (QSP) models. More details on the risk-based assessment framework can be found in the recently published ICH M15 Guideline on general principles for model-informed drug development (MIDD) (2), previous FDA publications (3, 4), as well as published NDA (New Drug Application) / BLA (Biologics License Application) reviews (5).
In the section that follows, each step in the risk-based credibility assessment framework is detailed, highlighting the key takeaways, and offering our interpretation of the guidance. This analysis is informed by a combination of collective industry insights and regulatory experiences.
The first step is to articulate the specific question the AI model aims to address. The question of interest could be any potential drug development question and should not be constrained by the type of model. The guidance provides two hypothetical examples of questions of interest. You may find that those questions could be perfectly applicable to a semi-mechanistic pharmacokinetic / pharmacodynamic model. Real examples of questions of interest can be found in previous publications (3) and NDA / BLA reviews (5).
The context of use (COU) specifies the AI model's purpose and boundaries. It describes what the model is intended to do, how its outputs will be used, and whether other evidence will complement its predictions. For instance, an AI model used in manufacturing to assess vial fill levels might supplement, but not replace, traditional quality control methods. These sources of evidence should be stated when describing the AI model's COU in step 2 and are relevant when determining the model influence in step 3.
Model risk is assessed based on two factors: model influence, the contribution of the AI model's output relative to other evidence used to answer the question of interest, and decision consequence, the significance of an adverse outcome resulting from an incorrect decision.
For example, in clinical settings, a high-risk model might directly determine patient monitoring protocols for a serious adverse event, making its accuracy and reliability critical.
Although the scale for model influence and decision consequence is not clearly defined in the current guidance, a three-level (low, medium, and high) score is defined in the ICH M15 MIDD guideline (2).
Performing the model risk assessment is a milestone in the risk-based credibility assessment framework, as its outcome directly impacts the model performance acceptance criteria, which will be laid out in the credibility assessment plan.
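To make the combination of these two factors concrete, the risk assessment can be sketched in code. This is a minimal illustrative sketch only: the draft guidance does not define a numeric scale, so the three-level (low, medium, high) scoring below borrows the ICH M15 convention, and the max-based combination rule and the `model_risk` function are hypothetical simplifications, not an official FDA mapping.

```python
# Hypothetical model-risk matrix: combine model influence and decision
# consequence on a three-level scale (per ICH M15) into an overall risk
# level. The "take the higher of the two" rule is an illustrative
# simplification, not an FDA-defined rule.

LEVELS = {"low": 1, "medium": 2, "high": 3}
NAMES = {v: k for k, v in LEVELS.items()}

def model_risk(influence: str, consequence: str) -> str:
    """Return the overall model-risk level, here taken as the
    higher of model influence and decision consequence."""
    return NAMES[max(LEVELS[influence], LEVELS[consequence])]

# Example from the guidance's clinical scenario: a model that directly
# determines patient monitoring for a serious adverse event has high
# influence and high consequence, hence high model risk.
print(model_risk("high", "high"))   # high
print(model_risk("low", "medium"))  # medium
```

A higher model-risk level translates into more stringent performance acceptance criteria and more extensive credibility assessment activities in the subsequent plan.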
This step involves crafting a detailed plan to evaluate the AI model's credibility. Credibility assessment activities should be based on the question of interest, the COU, and model risk. Key elements of the plan include:
This section of the guidance provides a detailed outline of a potential modeling and simulation plan. We recommend that sponsors work with the regulatory agency to reach agreement on the model risk assessment and the credibility assessment plan early, prior to execution. Sponsors are also encouraged to discuss the timelines for execution and reporting with the regulatory agency at this stage.
Sponsors are encouraged to engage with the FDA to ensure the credibility plan aligns with regulatory expectations. Execution should follow the predefined steps, while addressing any unforeseen challenges and justifying and documenting any deviations from the plan.
Of note, the FDA has not provided guidance on how the agency will be involved in monitoring or inspecting the execution, especially for high-risk cases.
All results from the credibility assessment should be documented in a "credibility assessment report." This report should include findings, justifications for any deviations, and insights into model performance.
In general, in addition to the credibility assessment report, all associated modeling files and scripts used to generate the outputs should also be submitted to the FDA so that reviewers can replicate the key simulations. For newer modeling and simulation tools, there may be unforeseen circumstances that delay submission. The guidance indicates that "submission of the credibility assessment report should be discussed with the FDA." We strongly recommend that sponsors engage closely with the Agency on submission activities.
The final step evaluates whether the AI model's credibility is sufficient for its intended COU. If inadequacies are identified, sponsors may need to refine the model, gather additional data, or alter its application.
In this section, the FDA offers a few options for sponsors when a model's credibility is deemed insufficient for the assessed model risk. We encourage sponsors to actively explore these options during the execution phase rather than waiting until the final step.
AI models are not static. They evolve over time as new data and insights become available. The FDA emphasizes the importance of life cycle maintenance to ensure models remain fit for purpose throughout their deployment. This involves:
For instance, in pharmaceutical manufacturing, changes to production processes or data inputs might necessitate retraining or reevaluating the AI model to maintain its accuracy and reliability.
A life cycle management plan for the AI model could be included in the marketing application to proactively obtain feedback from the Agency.
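The monitoring element of life cycle maintenance can be sketched as a recurring check of deployed-model performance against a predefined acceptance criterion. This is a minimal illustration under stated assumptions, not anything prescribed by the guidance: the threshold value, the rolling window, and the `needs_reevaluation` helper are all hypothetical.

```python
# Illustrative life-cycle monitoring sketch: flag a deployed AI model for
# reevaluation or retraining when its rolling performance falls below the
# acceptance criterion established in the credibility assessment plan.
# Threshold and window size are hypothetical examples.

ACCEPTANCE_THRESHOLD = 0.90  # hypothetical accuracy criterion

def needs_reevaluation(scores, threshold=ACCEPTANCE_THRESHOLD, window=5):
    """Return True if the mean of the last `window` performance
    scores falls below the acceptance threshold."""
    if len(scores) < window:
        return False  # not enough observations to judge drift
    recent = scores[-window:]
    return sum(recent) / len(recent) < threshold

# Example: performance degrading after a production process change
history = [0.95, 0.94, 0.93, 0.88, 0.87, 0.86, 0.85]
print(needs_reevaluation(history))  # True
```

In practice the trigger conditions, monitoring frequency, and corrective actions would be specified in the life cycle management plan discussed with the Agency.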
One of the most critical aspects of AI model credibility is the quality and management of data. The FDA highlights the need for:
Additionally, transparency in model development and evaluation is essential. This includes documenting how data were collected, processed, and used, as well as providing a clear rationale for model design choices.
Early and proactive engagement with the FDA is strongly encouraged. Sponsors can leverage formal meetings and specialized programs to discuss AI models and their regulatory implications. Examples of engagement options include:
The program meetings are dedicated to a specific program and could cover all aspects of the program's development, including but not limited to preclinical, clinical, clinical pharmacology, CMC (chemistry, manufacturing, and controls), biopharmaceutics, and regulatory questions. Each meeting is generally designed to be an hour long, so the number of questions for the FDA included in each meeting package should be limited.
Other engagement options, such as the MIDD meeting, might provide opportunities for a more in-depth discussion about the AI model. The sponsor should evaluate program timelines, specific questions to the FDA, and aspects other than the AI model to select an appropriate mechanism for interaction with the FDA on AI models.
The FDA has stressed in several places in the draft guidance that they would like sponsors to meet with them early in the AI model development process. Therefore, for sponsors who seek external collaborations and assistance to develop an AI model, it is also critical to engage with the RS COE at Simulations Plus early in the process. Partnering with Simulations Plus offers the advantage of working with an organization that has a well-established reputation and a strong track record of collaboration with regulatory agencies. With 20-25 years of experience in applying machine learning/AI across key areas such as ADME property prediction, AI-driven drug design, and high-throughput PBPK and QSP, Simulations Plus brings deep expertise to the table.
Our extensive experience positions us as a valuable partner in facilitating and supporting regulatory interactions around AI model implementation. Our proficiency in navigating regulatory frameworks ensures that AI tools are integrated into drug development processes in compliance with the guidance. To achieve this, it is critical that the regulatory agencies are provided with full transparency with respect to the elements outlined above to effectively support the selected strategy.
The FDA's guidance provides several illustrative examples of AI applications in pharmaceutical development:
These examples highlight the diverse potential of AI to enhance decision-making across the drug lifecycle, from clinical trials to post-marketing surveillance.
The FDA's draft guidance marks a significant step toward integrating AI into pharmaceutical development. For scientists and industry stakeholders, this guidance underscores the importance of:
By adhering to these principles, the pharmaceutical industry can harness the full potential of AI while ensuring drug safety, effectiveness, and quality.
Artificial intelligence holds significant promise for transforming pharmaceutical development. However, its effective implementation requires a careful balance of innovation and regulatory rigor. The FDA's draft guidance provides a roadmap for navigating this complex terrain, emphasizing a risk-based approach to establishing AI credibility and maintaining performance over time.
For industry professionals, this guidance is not just a regulatory requirement but an opportunity to lead the way in developing safe, effective, and innovative AI-driven solutions. Specifically, with the right approach, pharmaceutical companies can not only drive innovation but also build trust and transparency with regulators, ultimately advancing the adoption of AI to improve health outcomes.
If your organization is interested in incorporating AI into its drug development programs, The RS COE at Simulations Plus is equipped with the expertise to provide guidance and support. Learn more about how we can help.