Artificial intelligence (AI) is poised to transform clinical research in the coming years. The FDA recognizes it is no longer a question of whether AI will impact the industry but how the industry will harness its potential safely and effectively to develop new medical interventions. The agency's draft guidance, released in early January 2025, offers a helpful framework for assessing the credibility of AI models used in drug and biological product regulation, while underscoring the need for transparency, robust validation, and a risk-based approach.
This guidance focuses specifically on the use of AI models to produce information or data that supports regulatory decisions about the safety, effectiveness, and quality of drugs. Notably, it does not cover the use of AI in drug discovery or for operational efficiencies that do not impact patient safety, drug quality, or the reliability of study results. The guidance applies across the drug product lifecycle, including the nonclinical, clinical, postmarketing, and manufacturing phases.
The FDA's guidance proposes a risk-based framework to assess the credibility of AI models for specific contexts of use through a 7-step process:
1. Clearly state the specific question the AI model is intended to address. Answers to this question may draw on evidence from multiple sources (e.g., historical clinical trials, in vitro studies). These evidentiary sources should be explicitly identified when describing the AI model's context of use (COU).
2. Describe in detail the role and scope of the AI model and how its outputs will be used in the clinical trial. Will the AI model be the sole determinant of a decision, or will its output be combined with other information?
3. Evaluate the AI model's risk, meaning the potential for its output to lead to decision-making that yields an adverse outcome, based on two factors: model influence, the contribution of the AI model's output relative to other evidence informing the decision, and decision consequence, the significance of an adverse outcome resulting from an incorrect decision. Assessing the model's risk matters because the activities used to confirm the reliability of its outputs should match the level of risk and fit the specific COU. (A toy illustration of combining these two factors follows the list of steps.)
4. Create a plan to establish the credibility of the AI model's outputs within its COU. This plan should describe the model, its development process, and how it will be evaluated, including the model's architecture, the data used to train it, and the performance metrics used to assess it.
5. Put the credibility assessment plan into action. Engaging with the FDA on the plan beforehand can be highly beneficial for establishing expectations on the credibility assessment and for identifying potential risks and how to overcome challenges should they arise.
6. Document the outcomes of the credibility assessment plan, along with any deviations from it, in a credibility assessment report. The intent is to establish the AI model's credibility in the COU and call out any departures from the credibility assessment plan developed in step 4.
7. Based on the results documented in the credibility assessment report, a model may or may not be appropriate for the COU. If the AI model's credibility is deemed insufficient, sponsors can address the concerns in several ways, for example by increasing the rigor of the credibility assessment activities, incorporating additional evidence or controls to reduce the model's influence or mitigate risk, or narrowing the COU.
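To make step 3 more concrete, here is a minimal, purely illustrative Python sketch of how the two risk factors might be combined into an overall risk tier. The tier labels, numeric scoring, and cutoffs are hypothetical assumptions for illustration only; the guidance itself does not prescribe a scoring formula.

```python
# Illustrative only: a toy risk matrix combining the two factors named in
# the draft guidance, model influence and decision consequence. The numeric
# scoring and tier cutoffs below are hypothetical, not from the FDA.

LEVELS = {"low": 1, "medium": 2, "high": 3}

def model_risk(model_influence: str, decision_consequence: str) -> str:
    """Map the two qualitative factors to a rough overall risk tier."""
    score = LEVELS[model_influence] * LEVELS[decision_consequence]
    if score >= 6:
        return "high"    # e.g., sole evidence source for a safety decision
    if score >= 3:
        return "medium"
    return "low"         # e.g., corroborated output, minor consequences

# An AI model acting as the sole determinant (high influence) of a
# patient-safety decision (high consequence) lands in the highest tier:
print(model_risk("high", "high"))   # -> high
print(model_risk("low", "medium"))  # -> low
```

The takeaway mirrors the guidance: the higher the combined risk, the more rigorous the credibility assessment activities should be.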
The FDA's guidance also includes a special consideration for lifecycle maintenance of the credibility of AI model outputs in certain contexts of use. Because AI models can be self-evolving and sensitive to changes in their inputs, their performance can change over time. A risk-based approach to ongoing model monitoring, performance evaluation, and updating is therefore important; the sketch below shows one simple form such monitoring could take.
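As a rough illustration, and not a prescribed method, the following Python sketch compares a model's recent performance against the baseline established during credibility assessment and flags degradation for review. The metric, baseline value, and tolerance are hypothetical placeholders; in practice they would come from the credibility assessment plan.

```python
# Illustrative lifecycle-monitoring sketch. The baseline and tolerance are
# hypothetical placeholders standing in for values a sponsor would set in
# its credibility assessment plan.

from statistics import mean

BASELINE_ACCURACY = 0.92  # performance established during credibility assessment
TOLERANCE = 0.05          # allowed drop before the model is flagged for review

def check_drift(recent_scores: list[float]) -> str:
    """Compare recent per-batch accuracy against the validated baseline."""
    current = mean(recent_scores)
    if current < BASELINE_ACCURACY - TOLERANCE:
        return f"ALERT: accuracy {current:.2f} below tolerance; re-evaluate model"
    return f"OK: accuracy {current:.2f} within tolerance"

# Example: a model whose recent batches have slipped below tolerance
print(check_drift([0.91, 0.88, 0.84, 0.82]))  # -> ALERT: accuracy 0.86 ...
```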
The FDA strongly encourages sponsors to engage with the agency early in the development process to discuss the use of AI models. This early engagement can help set expectations for the credibility assessment process and identify potential challenges. The guidance describes various avenues for engagement, such as formal meetings and other programs.
The FDA's draft guidance on the use of AI in regulatory decision-making for drugs and biologics provides a clear, structured approach to harnessing the potential of AI while ensuring safety and credibility. By focusing on a risk-based framework, transparency, and lifecycle maintenance, the guidance outlines essential steps for integrating AI responsibly across the drug development lifecycle.