AHA - American Hospital Association

News release | May 6, 2024

AHA Response to Representative Bera on Artificial Intelligence in the Health Care Sector

The Honorable Ami Bera, M.D.
U.S. House of Representatives
172 Cannon House Office Building
Washington, DC 20515

Dear Representative Bera:

On behalf of our nearly 5,000 member hospitals, health systems and other health care organizations, and our clinician partners, including more than 270,000 affiliated physicians, 2 million nurses and other caregivers, the American Hospital Association (AHA) appreciates the opportunity to respond to your request for information regarding the current state of artificial intelligence (AI) in the health care sector.

While AI has been a part of health care for years, the emergence of generative AI tools at the end of 2022, such as ChatGPT, brought all types of AI into the public spotlight and sparked a discussion about how to use these tools safely and effectively. Although ChatGPT, other generative AI tools and large language models became widely known only a year and a half ago, hospitals and health systems had already been using AI, some for several years. In fact, AI has already shown measurable benefits in several areas, including improved diagnostics, streamlined operations and reduced administrative tasks, as well as risks related to data quality, privacy and algorithmic bias. AI is a complex set of technologies, and its regulation requires careful thinking to understand its many subtleties, diverse applications and definitions, which are still highly fluid.

Our response to your request, however, is focused specifically on your interest in the regulatory considerations related to AI's use in health care. Regulation of AI in health care needs to be flexible to keep up with the rapid pace of innovation and to allow hospitals and clinicians to safely harness the benefits of these powerful technologies for the good of their patients. A delicate balance must be struck between fully realizing the promise of AI and managing the risks of deploying it. As such, it is important to treat risk as a sliding scale when considering an AI use in health care and to ask how much human oversight a given application requires.

Technology is most effectively regulated based on how and where it is used, and this sector-specific approach has allowed the relevant oversight organizations to tailor their regulation to the risks associated with each use of the technology. AI is not a monolithic technology, and thus a one-size-fits-all approach could stifle innovation in patient care and hospital operations. Such an approach may even prove inadequate at addressing the safety and privacy risks unique to health care. Just as software is regulated based on its use across different sectors, the AHA urges Congress to consider regulating AI's use in a similar manner.

Existing technology-focused regulatory frameworks, such as the Food and Drug Administration's (FDA) guidance on Software as a Medical Device (SaMD), provide a solid foundation for this approach. These frameworks have been tested and refined over time, and they are already familiar to stakeholders. Adapting them to accommodate the unique aspects of AI could be more efficient and effective than creating new frameworks from scratch. However, new AI applications are developing rapidly. If, or perhaps when, existing frameworks prove inadequate for the continually evolving landscape of AI, it may be necessary for Congress to amend them or create new ones.

The AHA recognizes that health care applications of AI may pose novel challenges that may not be adequately addressed by existing regulatory frameworks. AI systems that provide diagnosis, prognosis or specific treatment recommendations for patients may offer significant, positive impacts on their health outcomes and quality of life. However, these systems may also raise ethical, legal and social issues, such as privacy, accountability, transparency, bias and consent.

None of these issues are addressed in frameworks like SaMD, but they are addressed in rules such as the Health Data, Technology, and Interoperability (HTI-1) rule from the Office of the National Coordinator for Health Information Technology (ONC), which is intended to be complementary to FDA guidance on SaMD. Thus, rather than tackle AI in health care broadly, HTI-1 establishes requirements for algorithmic transparency in clinical decision support (CDS) tools and predictive decision support interventions (DSIs) by expanding and modifying the definitions of CDS and DSI to incorporate AI-based technologies.

Additionally, the need for AI regulatory guardrails is sometimes addressed in decidedly non-technical rules governing a specific use case or application. One example is the 2024 Medicare Advantage (MA) final rule, in which the Centers for Medicare & Medicaid Services places the responsibility for controlling an AI tool's potential to introduce bias on the MA plan using the tool: "MA organizations should, prior to implementing an algorithm or software tool, ensure that the tool is not perpetuating or exacerbating existing bias, or introducing new biases."

The AHA supports continued and rigorous debate on this topic, with consideration given to the need for specific, but highly flexible, standards and guidelines for evaluating the safety, efficacy, privacy, transparency and fairness of AI systems, as well as their impact on the patients interacting with these systems.

Sincerely,

/s/

Lisa Kider Hrobsky
Senior Vice President, Legislative and Political Affairs
