Angus S. King Jr.

04/16/2024 | Press release

King, Colleagues Unveil Bipartisan Framework to Identify, Minimize Artificial Intelligence Risks

WASHINGTON, D.C. - Today, in a bipartisan letter to the Senate artificial intelligence (AI) working group leaders, U.S. Senators Angus King (I-ME), Co-Chair of the Cyberspace Solarium Commission; Mitt Romney (R-UT); Jack Reed (D-RI); and Jerry Moran (R-KS) unveiled the first congressional framework to deal exclusively with the extreme risks posed by future developments in advanced AI models. The senators' framework would establish federal oversight of frontier model hardware, development, and deployment to mitigate AI-enabled extreme risks from biological, chemical, cyber, and nuclear threats.

As Congress considers how to approach new technology developments, the senators' framework aims to prioritize the national security implications of AI while ensuring our domestic AI industry is able to develop and maintain an advantage over foreign adversaries. This framework is limited to frontier models - the most advanced AI models, which have yet to be developed.


"In the ever-evolving global threat landscape, the United States has to stay one step ahead of new technologies to protect both our national security and interests at home and abroad - and that means moving carefully, warily, and thoughtfully into an Artificial Intelligence future," said Senator King. "This AI framework provides critical guidelines for federal oversight of AI technology so that it cannot be misused by bad actors looking to cause harm. We must ask important questions now to wisely navigate our next steps and decisions. Thanks to my colleagues for working together on a solution that promotes American development while safeguarding the public against biological, chemical, cyber, and nuclear threats."

"AI has the potential to dramatically improve and transform our way of life, but it also comes with enormous risks to national security and our humanity at large," Senator Romney said. "My colleagues and I have spent the last several months developing a framework which would create safeguards and provide oversight of frontier AI models aimed at preventing foreign adversaries and bad actors from misusing advanced AI to cause widespread harm. It is my hope that our proposal will serve as a starting point for discussion on what actions Congress should take on AI-without hampering American innovation."

"Artificial intelligence models are being adopted at a rapid pace. While AI has massive potential to benefit society, we must recognize AI's security threats and ethical issues to ensure it's adopted in a manner that recognizes and mitigates these risks. This bipartisan framework addresses key catastrophic challenges posed by AI and the security protocols needed to safeguard against them. We can't wait to act. Responsible oversight must keep pace with technology and responsibly support innovation, opportunity, and discovery," said Senator Reed.

"The evolution of artificial intelligence is an opportunity for U.S. innovation, efficiency and strategic advantage," said Senator Moran. "However, we must responsibly harness the power of AI and make certain we are mitigating extreme risks that would threaten our national security. My colleagues and I developed this proposal to begin the discussion regarding how the U.S. can mitigate national security risks in a manner that ensures innovators are still able to secure a competitive edge over our adversaries in this critical technology area."

Artificial intelligence (AI) has the potential to dramatically improve and transform our way of life, but also presents a spectrum of risks that could be harmful to the American public, some of which could have catastrophic effects. Extremely powerful frontier AI could be misused by foreign adversaries, terrorists, and less sophisticated bad actors to cause widespread harm and threaten U.S. national security. Experts from across the U.S. government, industry, and academia believe that advanced AI could one day enable or assist in the development of biological, chemical, cyber, or nuclear weapons. While Congress considers how to approach new technology developments, we must prioritize AI's potential national security implications. New laws or regulations should protect America's competitive edge and avoid discouraging innovation and discovery.

The Romney, Reed, Moran, and King framework establishes federal oversight of frontier AI hardware, development, and deployment to mitigate AI-enabled extreme risks - requiring the most advanced model developers to guard against biological, chemical, cyber, or nuclear risks. An agency or federal coordinating body would oversee implementation of new safeguards, which would apply to only the very largest and most advanced models. Such safeguards would be reevaluated on a recurring basis to anticipate evolving threat landscapes and technology.

As Co-Chair of the Cyberspace Solarium Commission (CSC), Senator King is recognized as one of Congress' leading experts on cyber defense and as a strong advocate for a forward-thinking cyber strategy that emphasizes layered cyber deterrence. Since the commission officially launched in April 2019, dozens of its recommendations have been enacted into law, including the creation of a National Cyber Director. Senator King is also a senior member of the Senate Armed Services Committee, where he chairs the Subcommittee on Strategic Forces. He has been a steady voice on the need to address the growing nuclear capacity of our adversaries and has expressed concern about the emerging "nightmare weapon" hypersonic missiles being developed by Russia and China.

Full text of the two-pager - which includes more information on the applicable frontier models and oversight authorities - can be found here.

The full text of the letter can be found here or below.

+++

Dear Leader Schumer and Senators Rounds, Heinrich, and Young,

We appreciate your efforts over the past year to educate senators and staff on both the opportunities and risks posed by developments in artificial intelligence (AI). As the Senate's AI Insight Forums, scientific research, and broader policy discussions have highlighted, advancements in artificial intelligence have the potential to dramatically improve and transform our way of life, but also present a broad spectrum of risks that could be harmful to the American public. Even as we focus on the tremendous benefits, experts have warned that AI could perpetuate disinformation,[1] fraud,[2] bias,[3] and privacy concerns.[4] Others have voiced concerns that AI could pose threats to election integrity[5] and the future of the workforce.[6] As you develop a framework for legislation, considering solutions to these problems will be important. However, any comprehensive framework to address risks from AI should also include measures to guard against the potential catastrophic risks with respect to biological, chemical, cyber, and nuclear weapons.

According to the U.S. government, academia, and distinguished experts, advancements in AI have the potential to be misused by bad actors. The Department of Defense,[7] the Department of State,[8] the U.S. Intelligence Community,[9] and the National Security Commission on Artificial Intelligence,[10] as well as senior officials at the Department of Energy,[11] Argonne National Laboratory,[12] the Cybersecurity and Infrastructure Security Agency,[13] and the National Counterterrorism Center,[14] have underscored that advanced AI poses risks to U.S. national security, including the development of biological, chemical, cyber, or nuclear weapons.

At a September 2023 hearing titled "Advanced Technology: Examining Threats to National Security," the Senate Homeland Security and Governmental Affairs Subcommittee on Emerging Threats and Spending Oversight heard testimony that advanced AI models could facilitate or assist in the development of extreme national security risks, and that the U.S. government may lack the authorities to adequately respond to the risks posed by broadly capable, general-purpose frontier AI models.[15] In a worst-case scenario, these models could one day be leveraged by terrorists or adversarial nation state regimes to cause widespread harm or threaten U.S. national security.

At another September 2023 hearing before the Senate Energy and Natural Resources Committee, Dr. Rick Stevens, the Associate Laboratory Director for Computing, Environment, and Life Sciences at the Argonne National Laboratory, testified that in the future, "A small group working in secret with sufficiently powerful AI tools could develop a novel chemical, biological, or cyber threat. We will need to transform how we manage the risks posed by bad actors using the same AI tools we are using to improve science and advance society."[16]

The overlap between AI and biotechnology could lead to "the deliberate and incidental creation" of novel public health risks, according to the Office of Intelligence and Analysis (I&A) at the Department of Homeland Security (DHS).[17] Researchers at Carnegie Mellon have found that large language models (LLMs) can assist in biological and chemical research but also "raise substantial concerns about the safety and potential dual use consequences, particularly in relation to the proliferation of illicit activities and security threats."[18] Other findings from the RAND Corporation,[19] Gryphon Scientific,[20] and individuals affiliated with the Massachusetts Institute of Technology, Harvard University, SecureBio, and SecureDNA[21] highlight that certain AI models could produce outputs that could assist in the development of bioweapons or execution of a biological attack. Currently, much of this information can be found online by a dedicated party, particularly if they have domain expertise; however, the risks become clearer when we consider the implications of having this knowledge aggregated in one tool, accessible to non-experts who may be using simple prompts.[22]

While powerful AI models may be beneficial for cybersecurity defenses, they can also be leveraged to bolster cyber offensive capabilities to assist bad actors in creating customized malware or automating cyber attacks at a larger scale and higher speed.[23] According to DHS I&A, the proliferation of AI could help facilitate "larger-scale, faster, efficient, and more evasive cyber attacks."[24] FBI Director Christopher Wray likewise warned that "AI is going to enable threat actors to develop increasingly powerful, sophisticated, customizable, and scalable capabilities, and it's not going to take them long to do it."[25]

One alarming study found that red-teaming efforts produced instructions from an LLM on how to build a dirty bomb. The author notes that the results of their initial efforts contain "information that is broadly available online … however, additional questions yielded more precise estimations and recommendations … A would-be terrorist might not know where to find detailed and accurate instructions for building weapons of mass destruction, but could potentially circumvent that crucial barrier by simply tricking a publicly available AI model."[26]

U.S. allies have also identified risks posed by advanced AI models. The U.K. Department for Science, Innovation & Technology released a report which found that "[f]rontier AI may help bad actors to perform cyberattacks, run disinformation campaigns and design biological or chemical weapons. Frontier AI will almost certainly continue to lower the barriers to entry for less sophisticated threat actors."[27]

President Biden's Executive Order 14110, released this past October, echoed the concern over catastrophic risk through its focus on chemical, biological, radiological, and nuclear (CBRN) risks and cyber risks. The E.O. requires the National Institute of Standards and Technology (NIST) to establish guidance for the evaluation of AI-enabled cyber and biological harms to assist in the development of safe and secure AI models. The Department of Energy must also develop tools to assess whether AI model outputs could lead to CBRN, cyber, and related security threats.[28]

The E.O. also sets reporting requirements for advanced AI developers to inform the Department of Commerce on the development of the most advanced frontier models, initially defined as models trained on a quantity of computing power greater than 10^26 operations. Entities that acquire, develop, or possess large-scale computing clusters are also subject to reporting requirements. Additionally, cloud service providers must report on training runs for the most advanced frontier models when they involve transactions with foreign persons.
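To put the 10^26 figure in perspective, the sketch below applies the widely cited rule of thumb that total training compute is roughly six operations per model parameter per training token. Only the threshold constant comes from the executive order; the formula is a standard approximation, and the model size and token count are hypothetical, chosen purely for illustration.

```python
# Minimal illustrative sketch (not from the letter): checking whether a
# hypothetical training run would cross the E.O. 14110 reporting threshold
# of 10^26 operations. Uses the common "compute ~ 6 * N * D" approximation
# (about 6 operations per parameter per training token).

EO_REPORTING_THRESHOLD_OPS = 1e26  # threshold cited in E.O. 14110


def estimated_training_ops(n_parameters: float, n_training_tokens: float) -> float:
    """Approximate total training compute with the 6 * N * D rule of thumb."""
    return 6.0 * n_parameters * n_training_tokens


# Hypothetical run: a 500-billion-parameter model trained on 15 trillion tokens.
ops = estimated_training_ops(5e11, 1.5e13)
print(f"Estimated training compute: {ops:.2e} operations")
print(f"Subject to E.O. 14110 reporting: {ops > EO_REPORTING_THRESHOLD_OPS}")
```

Under these made-up numbers the run lands at roughly 4.5e25 operations, just under the threshold, which illustrates how the reporting requirement targets only the very largest training efforts.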

Congress should consider a permanent framework to mitigate extreme risks. This framework should also serve as the basis for international coordination to mitigate extreme risks posed by AI. This letter is an attempt to start a dialogue about the need for such a framework, which would be in addition to, not to the exclusion of, proposals focused on other risks presented by developments in AI.

Under this potential framework, the most advanced model developers in the future would be required to safeguard against four extreme risks - the development of biological, chemical, cyber, or nuclear weapons. An agency or federal coordinating body would be tasked to oversee the implementation of these proposed requirements, which would apply to only the very largest and most advanced models. Such requirements would be reevaluated on a recurring basis as we gain a better understanding of the threat landscape and the technology.

The American private sector is the engine that makes our economy the envy of the world. Whatever Congress does to address the risks of AI, we must ensure that our domestic AI industry is able to develop and maintain an advantage over foreign adversaries. We also must ensure that any new requirements placed on industry do not bar new entrants, who will help drive innovation and discovery. We hope this letter generates engagement and feedback from experts, industry, policymakers, and other stakeholders in the weeks to come, which will be necessary for us to create a framework that can become law.

We look forward to working with you on these ideas and other matters related to AI this year.

Sincerely,

###