Exclusive: Renowned Experts Pen Support for California’s Landmark AI Safety Bill
On August 7, a group of renowned professors co-authored a letter urging key lawmakers to support a California AI bill as it enters the final stages of the state’s legislative process. In a letter shared exclusively with TIME, Yoshua Bengio, Geoffrey Hinton, Lawrence Lessig, and Stuart Russell argue that the next generation of AI systems poses “severe risks” if “developed without sufficient care and oversight,” and describe the bill as the “bare minimum for effective regulation of this technology.”
The bill, titled the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047), was introduced by Senator Scott Wiener in February of this year. It requires AI companies training large-scale models to conduct rigorous safety testing for potentially dangerous capabilities and to implement comprehensive safety measures to mitigate risks.
“There are fewer regulations on AI systems that could pose catastrophic risks than on sandwich shops or hairdressers,” the four experts write.
The letter is addressed to the respective leaders of the legislative bodies the bill must pass through if it is to become law: Mike McGuire, the president pro tempore of California’s senate, where the bill passed in May; Robert Rivas, speaker of the state assembly, where the bill will face a vote later this month; and Governor Gavin Newsom, who, if the bill passes the assembly, must sign or veto the proposed legislation by the end of September.
With Congress gridlocked and Republicans pledging to repeal Biden’s AI executive order if they win in November, California (the world’s fifth-largest economy and home to many of the world’s leading AI developers) plays what the authors see as an “indispensable role” in regulating AI. If passed, the bill would apply to any company doing business in the state.
While polls suggest the bill is supported by a majority of Californians, it has faced harsh opposition from industry groups and tech investors, who claim it would stifle innovation, harm the open-source community, and “let China take the lead on AI development.” Venture capital firm Andreessen Horowitz has been particularly critical of the bill, setting up a website that urges citizens to write to the legislature in opposition. Others, such as startup incubator Y Combinator, Meta’s Chief AI Scientist Yann LeCun, and Stanford professor Fei-Fei Li (whose new $1 billion startup has received funding from Andreessen Horowitz), have also been vocal in their opposition.
The pushback has centered on provisions in the bill that would compel developers to provide reasonable assurances that an AI model will not pose an unreasonable risk of causing “critical harms,” such as aiding in the creation of weapons of mass destruction or causing severe damage to critical infrastructure. The bill would only apply to systems that both cost over $100 million to train and are trained using an amount of computing power above a specified threshold. These dual requirements mean the bill would likely affect only the largest AI developers. “No currently existing system would be classified,” Lennart Heim, a researcher at the RAND Corporation’s Technology and Security Policy Center, told TIME in June.
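To make that coverage rule concrete, here is a minimal sketch in Python of the bill’s conjunctive two-threshold test, assuming the 10^26-operation compute floor specified in the bill’s text; the function name and the example figures are hypothetical, for illustration only.

COST_THRESHOLD_USD = 100_000_000   # training-cost floor named in the bill
COMPUTE_THRESHOLD_OPS = 1e26       # compute floor per the bill's text (assumed here)

def is_covered_model(training_cost_usd: float, training_compute_ops: float) -> bool:
    # A model falls under the bill only if it exceeds BOTH thresholds.
    return (training_cost_usd > COST_THRESHOLD_USD
            and training_compute_ops > COMPUTE_THRESHOLD_OPS)

# Hypothetical figures: a frontier-scale run clears both bars; a smaller run does not.
print(is_covered_model(2e8, 3e26))   # True: subject to the bill's requirements
print(is_covered_model(5e7, 1e25))   # False: out of scope

Because both conditions must hold, a model that is expensive but trained below the compute floor, or compute-intensive but under the cost floor, would fall outside the bill’s scope.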
“As some of the experts who understand these systems most, we can say confidently that these risks are probable and significant enough to make safety testing and common-sense precautions necessary,” the authors of the letter write. Bengio and Hinton, who have previously supported the bill, are both winners of the Turing Award and are often referred to as “godfathers of AI,” alongside Yann LeCun. Russell is the author of Artificial Intelligence: A Modern Approach, widely considered the standard textbook on AI. And Lessig, a professor of law at Harvard, is broadly regarded as a founding figure of Internet law and a pioneer of the free culture movement, having founded Creative Commons and authored influential books on copyright and technology law. In addition to the risks noted above, they cite among their concerns the risks posed by autonomous AI agents that could act without human oversight.
“I worry that technology companies will not solve these significant risks on their own while locked in their race for market share and profit maximization. That’s why we need some rules for those who are at the frontier of this race,” Bengio told TIME over email.
The letter rejects the notion that the bill would hamper innovation, noting that as written it applies only to the largest AI models; that large AI developers have already made voluntary commitments to undertake many of the safety measures it outlines; and that similar regulations in Europe and China are in fact more restrictive than SB 1047. It also praises the bill’s “robust whistleblower protections” for AI lab employees who report safety concerns, protections increasingly seen as necessary given reports of reckless behavior at some labs.
In an interview with Vox last month, Senator Wiener noted that the bill has already been amended in response to criticism from the open-source community. The current version exempts original developers from shutdown requirements once a model is no longer in their control, and limits their liability when others significantly modify their models, effectively treating heavily modified versions as new models. Despite these changes, some critics still believe the bill would require open-source models to have a “kill switch.”
“Relative to the scale of risks we are facing, this is a remarkably light-touch piece of legislation,” the letter says, noting that the bill does not create a licensing regime or require companies to receive permission from a government agency before training a model, and that it relies on companies’ self-assessments of risk. The authors further write: “It would be a historic mistake to strike out the basic measures of this bill.”
Over email, Lessig adds, “Governor Newsom will have the opportunity to cement California as a national first-mover in regulating AI. Legislation in California would meet an urgent need. With a critical mass of the top AI firms based in California, there is no better place to take an early lead on regulating this emerging technology.”