09/13/2024 | News release | Distributed by Public on 09/13/2024 19:47
On August 29, California lawmakers passed the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047), marking yet another major development in states' efforts to regulate AI. The legislation, which draws on concepts from the White House's 2023 AI Executive Order ("AI EO"), follows months of high-profile debate and amendments and would establish an expansive AI safety and security regime for developers of "covered models." Governor Gavin Newsom (D) has until September 30 to sign or veto the bill.
If signed into law, SB 1047 would join Colorado's SB 205, the landmark AI anti-discrimination law passed in May and covered here, as another de facto standard for AI legislation in the United States in the absence of congressional action. In contrast to Colorado SB 205's focus on algorithmic discrimination risks for consumers, however, SB 1047 would address AI models that are technically capable of causing or materially enabling "critical harms" to public safety.
Covered Models. SB 1047 establishes a two-part definition of "covered models" subject to its safety and security requirements. First, prior to January 1, 2027, covered models are defined as AI models trained using a quantity of computing power that is both greater than 10²⁶ floating-point operations ("FLOPs") and valued at more than $100 million. This computing threshold mirrors the AI EO's threshold for dual-use foundation models subject to red-team testing and reporting requirements; the financial valuation threshold is designed to exclude models developed by small companies. Similar to the Commerce Department's discretion to adjust the AI EO's computing threshold, California's Government Operations Agency ("GovOps") may adjust SB 1047's computing threshold after January 1, 2027. By contrast, GovOps may not adjust the valuation threshold, which is indexed to inflation and must be "reasonably assessed" by the developer "using the average market prices of cloud compute at the start of training."
SB 1047 also applies to "covered model derivatives," defined as (1) "fine-tuned" covered models; (2) modified and unmodified copies of covered models; and (3) copies of covered models combined with other software. Prior to January 1, 2027, a fine-tuned model qualifies as a covered model derivative only if it was fine-tuned using a quantity of computing power of at least 3 × 10²⁵ FLOPs and valued at more than $10 million. After January 1, 2027, GovOps may adjust this computing threshold.
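For illustration only, the two-part numeric tests above can be expressed as a short sketch. The function and variable names below are our own shorthand, not terms from the statute, and the logic simply paraphrases the pre-2027 thresholds described in this post; it is not legal advice.

```python
# Hypothetical sketch of SB 1047's pre-January 1, 2027 threshold tests.
# Thresholds paraphrase the bill as summarized above; names are illustrative.

COMPUTE_THRESHOLD_FLOPS = 1e26        # training compute, in floating-point operations
COST_THRESHOLD_USD = 100_000_000      # assessed at average cloud-compute market prices

FINE_TUNE_COMPUTE_THRESHOLD_FLOPS = 3e25
FINE_TUNE_COST_THRESHOLD_USD = 10_000_000


def is_covered_model(training_flops: float, training_cost_usd: float) -> bool:
    """Both the compute and the valuation threshold must be exceeded."""
    return (training_flops > COMPUTE_THRESHOLD_FLOPS
            and training_cost_usd > COST_THRESHOLD_USD)


def is_covered_fine_tune(fine_tune_flops: float, fine_tune_cost_usd: float) -> bool:
    """Fine-tuned derivatives have their own, lower thresholds."""
    return (fine_tune_flops >= FINE_TUNE_COMPUTE_THRESHOLD_FLOPS
            and fine_tune_cost_usd > FINE_TUNE_COST_THRESHOLD_USD)


# Example: a training run using 2e26 FLOPs at a $150 million assessed cost
# would meet both prongs of the covered-model definition.
print(is_covered_model(2e26, 150_000_000))  # True
```

Note that both prongs must be satisfied: a model trained with more than 10²⁶ FLOPs at an assessed cost below $100 million would fall outside the pre-2027 definition.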
Critical Harms & AI Safety Incidents. SB 1047 would require AI developers to report "AI safety incidents," or specific events that increase the risk of critical harms, to the California Attorney General within 72 hours after discovery. Critical harms are defined as mass casualties or at least $500 million in damages caused or materially enabled by a covered model that: (1) creates or uses chemical, biological, radiological, or nuclear ("CBRN") weapons; (2) conducts, or provides instructions for conducting, cyberattacks on critical infrastructure; or (3) engages in unsupervised acts that would be criminal if done by a human. Critical harms also include other grave harms to public safety and security of comparable severity.
"AI safety incidents" are defined as incidents that demonstrably increase the risk that critical harms will occur by means of the following: (1) a covered model autonomously engaging in behavior not requested by a user; (2) theft, misappropriation, malicious use, inadvertent release, unauthorized access, or escape of a covered model's model weights; (3) critical failures of technical or administrative controls; or (4) unauthorized uses of a covered model to cause or materially enable critical harms.
Pre-Training Developer Requirements. SB 1047 would also impose requirements on developers prior to the start of training a covered model, including:
Pre-Deployment Developer Requirements. SB 1047 would impose separate requirements for developers prior to using a covered model or making a covered model available for commercial or public use, including:
Ongoing Developer Requirements. Finally, SB 1047 would require developers to annually reevaluate their policies, protections, and procedures, and would impose other ongoing requirements:
Future Regulations and Guidance. SB 1047 requires GovOps to issue, by January 1, 2027, new regulations on the computational thresholds for covered models and auditing requirements for third-party auditors, in addition to guidance for preventing unreasonable risks of critical harms. The regulations and guidance must be approved by the "Board of Frontier Models," a nine-member group of AI and safety experts established by SB 1047.
SB 1047 is just one of over a dozen AI bills passed by the California legislature last month covering a range of AI-related topics including election deepfakes, generative AI content and training data, and digital replicas. The passage of SB 1047 also comes as Colorado lawmakers embark on a revision process for SB 205, as we have covered here.
* * *
Follow our Global Policy Watch, Inside Global Tech, and Inside Privacy blogs for ongoing updates on key AI and other technology legislative and regulatory developments.