The California State Assembly has passed landmark regulation to curb the potential threat of AI. Lawmakers approved the bill – the first of its kind in the US – on Wednesday, and it now awaits a vote in the state Senate. If it passes, Governor Gavin Newsom will have roughly a month to decide whether to approve it.
Under the bill, AI companies based in the Golden State will be required to implement a suite of precautionary measures before training AI models. The legislation has attracted its share of controversy, with critics concerned that it will unduly burden smaller startups and stifle innovation.
AI continues to be a flashpoint for debate in the industry and beyond. Advocates of the technology worry that this latest regulation will stymie progress, while skeptics fear that the rapidly growing technology will lead to sweeping job cuts – and may even become ungovernable.
Cali Lawmakers Issue AI Caution
California state lawmakers on Wednesday passed the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047), a sweeping reform intended to limit the potential threats posed by AI. The bill was originally introduced by State Senator Scott Wiener.
The bill stipulates that AI companies based in the state must adopt strict precautionary measures before training their models. Among these, companies must be able to shut a model down quickly, protect it from “unsafe post-training modification,” and follow a rigorous testing procedure to evaluate its threat level, reports Reuters.
State senators will now vote on the legislation, which is expected to pass, before it reaches Governor Gavin Newsom. He will have until the end of September to decide how to proceed.
Bill Shockwaves Felt Across Industry
Unsurprisingly, the legislation has proven unpopular in some quarters. Last week, OpenAI Chief Strategy Officer Jason Kwon argued in a letter to Wiener that AI regulation should be left to the federal government, as reported by Bloomberg. In response, Wiener commented:
“SB 1047 is a highly reasonable bill that asks large AI labs to do what they’ve already committed to doing, namely, test their large models for catastrophic safety risk.”
Lawmakers did bow to public pressure – which included opposition from former House Speaker Nancy Pelosi – and amended the bill to replace criminal penalties with civil ones. Still, news of its passing is unlikely to be met with jubilation in Silicon Valley.
Critics argue that the regulation puts smaller AI startups under unnecessary pressure. By zeroing in on the potentially catastrophic impacts of the technology, the bill subjects fledgling companies to rules that could prove harmful to their operations, and it could also stifle innovation at larger companies.
AI Question Continues
In what is fast becoming one of the most divisive issues of our time, the AI debate rumbles on. Stoked by controversial figures like Elon Musk – who, notably, supports the bill – it has reached fever pitch.
As reported by Tech.co this week, X’s AI chatbot, Grok, was found to be spreading misinformation about the upcoming US Presidential election. In response, the company has urged users to get their election information from reputable sources.
Elsewhere, it was reported on Wednesday – the same day the California bill passed the State Assembly – that OpenAI is poised to close a new funding round that would put its valuation north of $100 billion. As scandals mount and successes surge, the AI picture becomes harder and harder to make out. But one thing we can say for sure: this story isn’t done yet.