California’s SB 1047, an ambitious piece of legislation designed to enhance AI safety, has recently undergone significant revisions. The changes attempt to balance regulatory oversight with industry flexibility, marking a crucial shift as the bill moves through the legislative process.
How SB 1047 Has Evolved
The revised version of SB 1047, which recently cleared the California Assembly’s Appropriations Committee, incorporates several key modifications. These updates aim to address concerns from major tech stakeholders and strike a balance between ensuring safety and fostering innovation in the AI sector.
Original Goals of SB 1047
Initially, SB 1047 aimed to hold AI developers accountable for severe incidents caused by their systems, such as mass casualty events or cyberattacks causing more than $500 million in damage. The bill sought to make developers legally liable for catastrophic outcomes resulting from their AI technologies.
Key Changes in the Revised Bill
- Limited Legal Actions: One significant change is that the bill no longer allows California’s Attorney General to sue AI companies for negligent safety practices before a disaster occurs. Instead, the Attorney General can seek injunctive relief to halt dangerous activities but can pursue further legal action only after a catastrophic event has actually occurred. This adjustment responds to concerns about preemptive enforcement.
- Revised Safety Certification Requirements: The original requirement for AI labs to submit safety certifications “under penalty of perjury” has been dropped. The revised bill instead requires labs to publish public statements describing their safety practices, removing the threat of criminal liability for those disclosures. This simplifies compliance while still promoting transparency.
- Adjusted Safety Standards: The bill now requires developers to exercise “reasonable care” to ensure their AI models do not pose significant risks, rather than providing “reasonable assurance” of safety. This replaces a stricter standard with the familiar duty of reasonable care, easing compliance while still holding developers responsible for significant risks.
- Restructured Oversight: The proposed Frontier Model Division (FMD) has been replaced with the Board of Frontier Models, which will be part of the existing Government Operations Agency. The board, now expanded to nine members, will oversee safety guidelines and regulations for AI models in a more streamlined manner.
- Open Source Model Protection: New provisions shield people who fine-tune an AI model at a cost below $10 million from being treated as its developer. Responsibility remains with the original developer of the model, providing a clearer delineation of accountability.
The revisions to SB 1047 represent a compromise between regulatory needs and industry concerns. Anthropic, a leading AI firm, played a crucial role in advocating for several of these changes. Senator Scott Wiener, the bill’s author, said the amendments address key concerns raised by industry players while supporting both safety and innovation.
“The goal of SB 1047 has always been to enhance AI safety while fostering innovation,” said Nathan Calvin, Senior Policy Counsel for the Center for AI Safety Action Fund. “The new amendments will support that goal.”
Despite these adjustments, some critics remain dissatisfied, dismissing the amendments as mere “window dressing.” Martin Casado, a general partner at Andreessen Horowitz, voiced skepticism, arguing that the changes fail to address the bill’s fundamental problems.
The bill will now proceed to the California Assembly floor for a vote. If approved, it will return to the Senate for a concurrence vote on the recent amendments. Once both chambers approve it, the bill will go to Governor Gavin Newsom, who can sign or veto it.
Governor Newsom has not yet commented publicly on the revised bill, but he has previously shown support for maintaining California’s leadership in AI innovation. The final form of SB 1047 will likely reflect a balance between regulatory oversight and the need to nurture a dynamic tech ecosystem.
The evolution of SB 1047 marks a critical moment in AI regulation. By addressing key concerns from industry stakeholders while maintaining its core safety objectives, the bill illustrates the complex interplay between regulation and innovation. As SB 1047 advances through the legislative process, its final outcome will have a significant impact on the future of AI governance in California and could set a precedent for similar efforts nationwide.
The ongoing debate highlights the challenge of regulating rapidly advancing technologies while encouraging their development. As the bill nears its final stages, its impact on both the tech industry and broader AI safety efforts will be closely watched. The outcome will showcase how California navigates the dual imperatives of protecting the public and fostering technological progress.
Stay tuned for further updates on SB 1047 and its implications for the future of AI regulation and innovation.