04/01/2024 | News release | Distributed by Public on 04/01/2024 18:32
On March 15, 2024, FDA's medical product centers - CBER, CDER, and CDRH - along with the Office of Combination Products (OCP) published a paper outlining their key areas of focus for the development and use of artificial intelligence (AI) across the medical product life cycle. The paper, entitled "Artificial Intelligence & Medical Products: How CBER, CDER, CDRH, and OCP are Working Together," is intended by the Agency to "provide greater transparency regarding how FDA's medical product Centers are collaborating to safeguard public health while fostering responsible and ethical innovation." The FDA paper is the latest in a series of informal statements from the Agency about the use of AI in the discovery, development, manufacturing, and commercialization of medical products, as well as in medical devices that incorporate AI. Here are five key takeaways from FDA's recent paper.
Consistent with FDA's longstanding approach to the regulation of medical products, the paper recognizes the value of a risk-based approach to regulating AI in the products the Agency oversees. The paper highlights how "AI management requires a risk-based regulatory framework built on robust principles, standards, best practices, and state-of-the-art regulatory science tools that can be applied across AI applications and be tailored to the relevant medical product" and, to the extent feasible, "can be applied across various medical products and uses within the health care delivery system."
As part of this risk-based approach, the Centers also plan to leverage and continue building upon existing FDA initiatives for the evaluation and regulation of AI used in medical products, including FDA's May 2023 Discussion Paper on Using Artificial Intelligence and Machine Learning in the Development of Drug and Biological Products, CDER's Framework for Regulatory Advanced Manufacturing Evaluation (FRAME) Initiative, and CDRH's January 2021 AI/ML-Based Software as a Medical Device (SaMD) Action Plan.
The paper notes that the Centers intend to develop policies that provide regulatory predictability and clarity for the use of AI, while also supporting innovation. Planned FDA guidance documents include:
The publication of these guidance documents will open the door for public comments and additional engagement opportunities, and life sciences and medical device companies should consider submitting comments.
Mitigating bias and discrimination continues to be top-of-mind at FDA. The paper highlights several demonstration projects and initiatives the Centers plan to support in an effort to identify and reduce the risk of biases in AI tools, including:
These actions align with the Agency's overarching efforts to develop methodologies for identification and elimination of bias, as well as President Biden's October 2023 AI Executive Order that called for federal guidance and resources on the incorporation of equity principles in AI-enabled technologies used in the health sector, the use of disaggregated data on affected populations and representative population data sets when developing new models, and the monitoring of algorithmic performance against discrimination and bias.
The Centers intend to support various projects and initiatives centered around performance monitoring and ensuring reliability throughout the total product life cycle. Specifically, the Centers intend to support:
Real-world performance monitoring and quality assurance throughout the total product life cycle have been hot topics for some time. For example, President Biden's AI Executive Order directed the formation of an AI Task Force to, in part, identify guidance and resources on long-term and real-world performance monitoring of AI technologies in the health sector, including "clinically relevant or significant modifications and performance across population groups, with a means to communicate product updates to regulators, developers, and users." Stakeholders have previously asked FDA for clarity on best practices for real-world performance monitoring of AI/ML-based software, and FDA's 2021 AI Action Plan stated that the Agency would support the piloting of real-world performance monitoring by working with stakeholders on a voluntary basis and by developing frameworks for gathering and utilizing real-world performance metrics, as well as thresholds and performance evaluations for those metrics. Additionally, FDA's May 2023 AI Discussion Paper emphasized the importance of evaluating AI/ML models over time to assess model risk and credibility, and solicited feedback on examples of best practices stakeholders use to monitor AI/ML models. FDA's collaborations with stakeholders on these efforts over the past several years could inform future guidance.
The paper highlights the importance of the Centers' current collaboration with a variety of stakeholders, including developers, patient groups, academia, and global regulators, in cultivating a patient-centered regulatory approach that emphasizes collaboration and health equity. The paper notes the Centers' intent to continue fostering these collaborative partnerships, including by continuing to solicit input from interested parties on "critical aspects" of the use of AI in medical products such as transparency, explainability, governance, bias, cybersecurity, and quality assurance.
Perhaps in an effort to facilitate collaboration with various stakeholders, the Director of FDA's Digital Health Center of Excellence, Troy Tazbaz, recently joined the Board of Directors of the Coalition for Health AI (CHAI). He joins Micky Tripathi, National Coordinator for Health Information Technology within the Department of Health and Human Services (HHS), and several other representatives from academia, industry, and medical centers. Tazbaz and Tripathi also will serve on CHAI's "Government Advisory Board" along with Melanie Fontes Rainer, Director of the Office for Civil Rights within HHS, and several other representatives from the White House Office of Science and Technology Policy, the Centers for Disease Control and Prevention, the Centers for Medicare & Medicaid Services, the Veterans Health Administration, and the Advanced Research Projects Agency for Health.
The paper also notes the Centers' intention to continue working closely with global collaborators to "promote international cooperation on standards, guidelines, and best practices to encourage consistency and convergence in the use and evaluation of AI across the medical product landscape." FDA has previously collaborated with Health Canada and the UK's MHRA to develop guiding principles for Good Machine Learning Practice and for predetermined change control plans (PCCPs) for machine learning-enabled medical devices. More recently, FDA took a step toward international harmonization by issuing a proposed rule to amend the Quality System Regulation to incorporate by reference the international standard ISO 13485. These actions indicate that regulators are working toward a united front through close alignment on best practices and standards.
Looking Ahead
We expect to see many more policies, frameworks, guidance documents, and initiatives centered around AI in the coming months. It remains to be seen, however, how FDA's approach to AI will intersect with broader efforts to regulate AI. Emerging proposals to regulate AI could apply to AI that is also regulated by FDA, but few address the overlap with FDA's existing medical product authorities. For instance, some proposals focus on types of AI technologies (e.g., requirements to label all content generated by generative AI regardless of the intended use), whereas others take a sector-specific approach and recognize that FDA's existing regulatory frameworks already govern certain uses of AI (e.g., Senator Cassidy's white paper on the deployment of AI in healthcare settings, which disfavored a one-size-fits-all approach to AI regulation and instead called for leveraging existing frameworks).
But even sector-specific approaches may result in regulatory requirements that overlap with FDA requirements for FDA-regulated AI. For example, in January 2024, HHS's Office of the National Coordinator for Health Information Technology (ONC) published a final rule revising the certification requirements for health IT developers, which included requirements for AI-based "predictive decision support interventions" enabled by or interfacing with health IT. Many predictive decision support interventions under the ONC final rule may also be FDA-regulated medical devices. While ONC stated that it collaborated with FDA to maximize alignment, developers of medical device software that is also a predictive decision support intervention will ultimately need to assess compliance with both FDA's and ONC's requirements.
In short, it will be critical to monitor developments and craft engagement strategies as policy-makers continue to collaborate and draw new lines around AI regulation.