
Why California's new AI safety law succeeded where SB 1047 failed

California just made history as the first state to require AI safety transparency from the biggest labs in the industry. Governor Newsom signed SB 53 into law this week, mandating that AI giants like OpenAI and Anthropic disclose, and stick to, their safety protocols. The decision is already sparking debate about whether other states will follow suit. 

Adam Billen, vice president of public policy at Encode AI, joined Equity to break down what California’s new AI transparency law actually means — from whistleblower protections to safety incident reporting requirements. He also explains why SB 53 succeeded where SB 1047 failed, what “transparency without liability” looks like in practice, and what’s still on Governor Newsom’s desk, including rules for AI companion chatbots.

Equity is TechCrunch's flagship podcast, produced by Theresa Loconsolo, and posts every Wednesday and Friday.

Subscribe to Equity on Apple Podcasts, Overcast, Spotify, and all the casts. You can also follow Equity on X and Threads at @EquityPod.




