Romney, Reed, Moran, King Unveil Framework to Mitigate Extreme AI Risks

April 16, 2024

First-of-its-kind framework establishes federal oversight of frontier AI to guard against biological, chemical, cyber, and nuclear threats

WASHINGTON, DC -- Today, in a letter to the Senate artificial intelligence (AI) working group leaders, U.S. Senators Mitt Romney (R-UT), Jack Reed (D-RI), Jerry Moran (R-KS), and Angus King (I-ME) unveiled the first congressional framework to deal exclusively with the extreme risks posed by future developments in advanced AI models. The senators' framework would establish federal oversight of frontier model hardware, development, and deployment to mitigate AI-enabled extreme risks from biological, chemical, cyber, and nuclear threats.

As Congress considers how to approach new technology developments, the senators' framework aims to prioritize the national security implications of AI while ensuring our domestic AI industry is able to develop and maintain an advantage over foreign adversaries. This framework is limited to frontier models: the most advanced AI models, which have yet to be developed.

"AI has the potential to dramatically improve and transform our way of life, but it also comes with enormous risks to national security and our humanity at large," Senator Romney said. "My colleagues and I have spent the last several months developing a framework which would create safeguards and provide oversight of frontier AI models aimed at preventing foreign adversaries and bad actors from misusing advanced AI to cause widespread harm. It is my hope that our proposal will serve as a starting point for discussion on what actions Congress should take on AI-without hampering American innovation."

"Artificial intelligence models are being adopted at a rapid pace. While AI has massive potential to benefit society, we must recognize AI's security threats and ethical issues to ensure it's adopted in a manner that recognizes and mitigates these risks. This bipartisan framework addresses key catastrophic challenges posed by AI and the security protocols needed to safeguard against them. We can't wait to act. Responsible oversight must keep pace with technology and responsibly support innovation, opportunity, and discovery," said Senator Reed.

"The evolution of artificial intelligence is an opportunity for U.S. innovation, efficiency and strategic advantage," said Senator Moran. "However, we must responsibly harness the power of AI and make certain we are mitigating extreme risks that would threaten our national security. My colleagues and I developed this proposal to begin the discussion regarding how the U.S. can mitigate national security risks in a manner that ensures innovators are still able to secure a competitive edge over our adversaries in this critical technology area."

"In the ever-evolving global threat landscape, the United States has to stay one step ahead of new technologies to protect both our national security and interests at home and abroad - and that means moving carefully, warily, and thoughtfully into an Artificial Intelligence future," said Senator King. "This AI framework provides critical guidelines for federal oversight of AI technology so that it cannot be misused by bad actors looking to cause harm. We must ask important questions now to wisely navigate our next steps and decisions. Thanks to my colleagues for working together on a solution that promotes American development while safeguarding the public against biological, chemical, cyber, and nuclear threats."

Background:

Artificial intelligence (AI) has the potential to dramatically improve and transform our way of life, but also presents a spectrum of risks that could be harmful to the American public, some of which could have catastrophic effects. Extremely powerful frontier AI could be misused by foreign adversaries, terrorists, and less sophisticated bad actors to cause widespread harm and threaten U.S. national security. Experts from across the U.S. government, industry, and academia believe that advanced AI could one day enable or assist in the development of biological, chemical, cyber, or nuclear weapons. While Congress considers how to approach new technology developments, we must prioritize AI's potential national security implications. New laws or regulations should protect America's competitive edge and avoid discouraging innovation and discovery.

The Romney, Reed, Moran, King framework would establish federal oversight of frontier AI hardware, development, and deployment to mitigate AI-enabled extreme risks, requiring the most advanced model developers to guard against biological, chemical, cyber, or nuclear risks. An agency or federal coordinating body would oversee implementation of new safeguards, which would apply to only the very largest and most advanced models. Such safeguards would be reevaluated on a recurring basis to anticipate evolving threat landscapes and technology.

Responses from stakeholders and the public should be submitted by May 17 to: [email protected]
