Covington & Burling LLP

04/01/2024 | News release

FDA Medical Product Centers Continue Focus on AI

On March 15, 2024, FDA's medical product centers - CBER, CDER, and CDRH - along with the Office of Combination Products (OCP) published a paper outlining their key areas of focus for the development and use of artificial intelligence (AI) across the medical product life cycle. The paper, entitled "Artificial Intelligence & Medical Products: How CBER, CDER, CDRH, and OCP are Working Together," is intended by the Agency to "provide greater transparency regarding how FDA's medical product Centers are collaborating to safeguard public health while fostering responsible and ethical innovation." The FDA paper is the latest in a series of informal statements from the Agency about the use of AI in the discovery, development, manufacturing, and commercialization of medical products, as well as for medical devices that incorporate AI. Here are five key takeaways from FDA's recent paper.

  1. The Centers continue to emphasize a risk-based regulatory framework for AI that builds upon existing FDA initiatives.

Consistent with FDA's longstanding approach to the regulation of medical products, the paper recognizes the value of a risk-based approach to regulating the AI applications the Agency oversees. The paper highlights how "AI management requires a risk-based regulatory framework built on robust principles, standards, best practices, and state-of-the-art regulatory science tools that can be applied across AI applications and be tailored to the relevant medical product" and, to the extent feasible, "can be applied across various medical products and uses within the health care delivery system."

As part of this risk-based approach, the Centers also plan to leverage and continue building upon existing FDA initiatives for the evaluation and regulation of AI used in medical products, including FDA's May 2023 Discussion Paper on Using Artificial Intelligence and Machine Learning in the Development of Drug and Biological Products, CDER's Framework for Regulatory Advanced Manufacturing Evaluation (FRAME) Initiative, and CDRH's January 2021 AI/ML-Based Software as a Medical Device (SaMD) Action Plan.

  2. FDA plans to release several AI guidance documents this year, providing an opportunity for engagement.

The paper notes that the Centers intend to develop policies that provide regulatory predictability and clarity for the use of AI, while also supporting innovation. Planned FDA guidance documents include:

  • Draft guidance on life cycle management considerations and premarket submission recommendations for AI-enabled device software functions. As background, in June 2023, FDA released a final guidance entitled "Content of Premarket Submissions for Device Software Functions." The title of the proposed draft guidance on CDRH's guidance agenda suggests that the Agency's premarket submission recommendations may differ for AI-enabled device software functions, and it is likely that the new draft guidance will directly address novel premarket submission issues raised by incorporating AI into device software functions.
  • Draft guidance on considerations for the use of AI to support regulatory decision-making for drugs and biological products. The title of this planned draft guidance is similar to FDA's August 2023 final guidance entitled "Considerations for the Use of Real-World Data and Real-World Evidence to Support Regulatory Decision-Making for Drug and Biological Products," which focused on real-world data and evidence (RWD/RWE) and did not discuss AI. The planned draft guidance on CDER's guidance agenda may provide additional insight into the use of AI in RWE studies. FDA also has previously given attention to the internal infrastructure needed to assess regulatory submissions that include data from Digital Health Technologies (DHTs). For example, in March 2023 the Agency issued a Framework for the Use of DHTs in Drug and Biological Product Development, which stated that FDA plans to "enhance its IT capabilities to support the review of DHT-generated data," including by establishing "a secure cloud technology to enhance its infrastructure and analytics environment that will enable FDA to effectively receive, aggregate, store, and process large volumes of data." The new draft guidance could build upon the themes outlined in this framework, with a specific focus on AI.
  • Final guidance on marketing submission recommendations for predetermined change control plans for AI-enabled medical device software functions. FDA plans to finalize the Agency's April 2023 draft guidance on predetermined change control plans (PCCPs). PCCPs describe planned changes that may be made to a device that otherwise would require premarket review by the Agency, facilitating iterative improvements through modifications to an AI- or machine learning-enabled device while continuing to provide a reasonable assurance of device safety and effectiveness. The final guidance likely will incorporate or address feedback the Agency has received on the draft guidance and may also address real-world challenges the Agency has faced or "lessons learned" from reviewing submitted PCCPs to date.

The publication of these guidance documents will open the door for public comments and additional engagement opportunities, and life sciences and medical device companies should consider submitting comments.

  3. Mitigating bias continues to be a front-burner issue.

Mitigating bias and discrimination continues to be top-of-mind at FDA. The paper highlights several demonstration projects and initiatives the Centers plan to support in an effort to identify and reduce the risk of biases in AI tools, including:

  • Regulatory science efforts to develop methodology for evaluating AI algorithms, identifying and mitigating bias, and ensuring the robustness and resilience of AI algorithms to withstand changing clinical inputs and conditions.
  • Demonstration projects that (1) highlight different points where bias can be introduced in the AI development life cycle and how it can be addressed, including through risk management; and (2) consider health inequities associated with the use of AI in medical product development to promote equity and ensure data representativeness, leveraging ongoing diversity, equity, and inclusion efforts.
  • Best practices for documenting and ensuring that data used to train and test AI models are fit for use, including adequately representing the target population.
  • Considerations for evaluating the safe, responsible, and ethical use of AI in the medical product life cycle.

These actions align with the Agency's overarching efforts to develop methodologies for identification and elimination of bias, as well as President Biden's October 2023 AI Executive Order that called for federal guidance and resources on the incorporation of equity principles in AI-enabled technologies used in the health sector, the use of disaggregated data on affected populations and representative population data sets when developing new models, and the monitoring of algorithmic performance against discrimination and bias.

  4. The paper focuses on the total product life cycle.

The Centers intend to support various projects and initiatives centered around performance monitoring and ensuring reliability throughout the total product life cycle. Specifically, the Centers intend to support:

  • Demonstration projects that support the ongoing monitoring of AI tools to ensure adherence to standards and that the tools maintain performance and reliability throughout their life cycle.
  • A framework and strategy for quality assurance of AI-enabled tools or systems used in the medical product life cycle, which emphasize continued monitoring and mitigation of risks.
  • Best practices for long-term safety and real-world performance monitoring of AI-enabled medical products.
  • Educational initiatives for regulatory bodies, health care professionals, patients, researchers, and industry as they navigate the safe and responsible use of AI in medical product development and in medical products.

Real-world performance monitoring and quality assurance throughout the total product life cycle have been hot topics for some time. For example, President Biden's AI Executive Order directed the formation of an AI Task Force to, in part, identify guidance and resources on long-term and real-world performance monitoring of AI technologies in the health sector, including "clinically relevant or significant modifications and performance across population groups, with a means to communicate product updates to regulators, developers, and users." Stakeholders have previously asked FDA for clarity on best practices for real-world performance monitoring of AI/ML-based software, and FDA's 2021 AI Action Plan stated that the Agency would support the piloting of real-world performance monitoring by working with stakeholders on a voluntary basis and by developing frameworks for gathering and utilizing real-world performance metrics, as well as thresholds and performance evaluations for those metrics. Additionally, FDA's May 2023 AI Discussion Paper emphasized the importance of evaluating AI/ML models over time to assess model risk and credibility, and solicited feedback on examples of best practices stakeholders use to monitor AI/ML models. FDA's collaborations with stakeholders on these efforts over the past several years could inform future guidance.

  5. The paper emphasizes the importance of collaboration and international harmonization.

The paper highlights the importance of the Centers' ongoing collaboration with a variety of stakeholders, including developers, patient groups, academia, and global regulators, in cultivating a patient-centered regulatory approach that emphasizes health equity. The paper notes the Centers' intent to continue fostering these collaborative partnerships, including by continuing to solicit input from interested parties on "critical aspects" of the use of AI in medical products such as transparency, explainability, governance, bias, cybersecurity, and quality assurance.

Perhaps in an effort to facilitate collaboration with various stakeholders, the Director of FDA's Digital Health Center of Excellence, Troy Tazbaz, recently joined the Board of Directors of the Coalition for Health AI (CHAI). He joins Micky Tripathi, National Coordinator for Health Information Technology within the Department of Health and Human Services (HHS), and several other representatives from academia, industry, and medical centers. Tazbaz and Tripathi also will serve on CHAI's "Government Advisory Board" along with Melanie Fontes Rainer, Director of the Office for Civil Rights within HHS, and several other representatives from the White House Office of Science and Technology Policy, the Centers for Disease Control and Prevention, the Centers for Medicare & Medicaid Services, the Veterans Health Administration, and the Advanced Research Projects Agency for Health.

The paper also notes the Centers' intention to continue working closely with global collaborators to "promote international cooperation on standards, guidelines, and best practices to encourage consistency and convergence in the use and evaluation of AI across the medical product landscape." FDA has previously collaborated with Health Canada and the UK's MHRA to develop guiding principles for Good Machine Learning Practice and for PCCPs for machine learning-enabled medical devices. FDA also recently took a step toward international harmonization by finalizing a rule that amends the Quality System Regulation to incorporate by reference international standard ISO 13485. These actions indicate that regulators are working toward a united front through close alignment on best practices and standards.

Looking Ahead

We expect to see many more policies, frameworks, guidance documents, and initiatives centered around AI in the coming months. It remains to be seen, however, how FDA's approach to AI will intersect with broader regulatory efforts. For example, emerging proposals to regulate AI could apply to AI that also is regulated by FDA, but few address the overlap with FDA's existing medical product authorities. Some proposals focus on types of AI technologies (e.g., requirements to label all content generated by generative AI regardless of intended use), whereas others take a sector-specific approach and recognize that FDA's existing regulatory frameworks already govern certain uses of AI (e.g., Senator Cassidy's white paper on the deployment of AI in health care settings, which disfavored a one-size-fits-all approach to AI regulation and instead called for leveraging existing frameworks).

But even sector-specific approaches may result in regulatory requirements that overlap with FDA requirements for FDA-regulated AI. For example, in January 2024, HHS's Office of the National Coordinator for Health Information Technology (ONC) published a final rule revising the certification requirements for health IT developers, which included requirements for AI-based "predictive decision support interventions" enabled by or interfacing with health IT. Many predictive decision support interventions under the ONC final rule may also be FDA-regulated medical devices. While ONC stated that it collaborated with FDA to maximize alignment, developers of medical device software that also qualifies as a predictive decision support intervention ultimately will need to assess compliance with both FDA's and ONC's requirements.

In short, it will be critical to monitor developments and craft engagement strategies as policy-makers continue to collaborate and draw new lines around AI regulation.