California weakens bill to prevent AI disasters before final vote, taking advice from Anthropic

California’s bill to prevent AI disasters, SB 1047, has faced significant opposition from many parties in Silicon Valley. Today, California lawmakers bent slightly to that pressure, adding several amendments suggested by AI firm Anthropic and other opponents.

On Thursday the bill passed through California’s Appropriations Committee, a major step towards becoming law, with several key changes, Senator Wiener’s office tells TechCrunch.

SB 1047 still aims to prevent large AI systems from killing lots of people, or causing cybersecurity events that cost over $500 million, by holding developers liable. However, the bill now grants California’s government less power to hold AI labs to account.

What does SB 1047 do now?

Most notably, the bill no longer allows California’s attorney general to sue AI companies for negligent safety practices before a catastrophic event has occurred. This was a suggestion from Anthropic.

Instead, California’s attorney general can seek injunctive relief, requiring a company to cease an operation it deems dangerous, and can still sue an AI developer if its model does cause a catastrophic event.

Further, SB 1047 no longer creates the Frontier Model Division (FMD), a new government agency formerly included in the bill. However, the bill still creates the Board of Frontier Models – the core of the FMD – and places it inside the existing Government Operations Agency. In fact, the board is bigger now, with nine people instead of five. The Board of Frontier Models will still set compute thresholds for covered models, issue safety guidance, and issue regulations for auditors.

Senator Wiener also amended SB 1047 so that AI labs no longer need to submit certifications of safety test results “under penalty of perjury.” Now, these AI labs are simply required to submit public “statements” outlining their safety practices, and the bill no longer imposes any criminal liability.

SB 1047 also now includes more lenient language around how developers ensure AI models are safe. Now, the bill requires developers to exercise “reasonable care” to ensure AI models do not pose a significant risk of causing catastrophe, instead of the “reasonable assurance” the bill required before.

Further, lawmakers added a protection for open-source fine-tuned models. If someone spends less than $10 million fine-tuning a covered model, they are explicitly not considered a developer under SB 1047. The responsibility will still fall on the original, larger developer of the model.

Why all the changes now?

While the bill has faced significant opposition from U.S. congressmen, renowned AI researchers, Big Tech, and venture capitalists, it has flown through California’s legislature with relative ease. These amendments are likely intended to appease SB 1047’s opponents and present Governor Newsom with a less controversial bill he can sign into law without losing support from the AI industry.

While Newsom has not publicly commented on SB 1047, he’s previously indicated his commitment to California’s AI innovation.

That said, these changes are unlikely to appease staunch critics of SB 1047. While the bill is notably weaker than before these amendments, SB 1047 still holds developers liable for the dangers of their AI models. That core fact about SB 1047 is not universally supported, and these amendments do little to address it.

What’s next?

SB 1047 is now headed to California’s Assembly floor for a final vote. If it passes there, it will need to be referred back to California’s Senate for a vote due to these latest amendments. If it passes both, it will head to Governor Newsom’s desk, where it could be vetoed or signed into law.

Lisa Holden
Lisa Holden is a news writer for LinkDaddy News. She writes health, sport, tech, and more. Some of her favorite topics include the latest trends in fitness and wellness, the best ways to use technology to improve your life, and the latest developments in medical research.
