California Governor Gavin Newsom has vetoed a landmark AI safety bill, citing concerns it could stifle innovation and drive tech companies out of the state. 🚫💻 Instead, he plans to collaborate with experts to develop more effective, science-based regulations for AI technology. 🤝✨ Read the full article from The Verge! 👇 #AISafety #GavinNewsom #TechRegulation #ArtificialIntelligence #AIEthics #TechIndustry #PolicyChange
Themis K.’s Post
More Relevant Posts
-
🌐 In a significant move for the future of AI, California Governor Gavin Newsom has vetoed Senate Bill 1047, marking a pivotal moment in the ongoing dialogue around AI safety and innovation. While the bill aimed to establish groundbreaking safety measures for large AI models, Newsom expressed concern that it concentrated narrowly on high-cost models, potentially overlooking critical risks posed by smaller systems. This decision sparks an important conversation: How do we ensure comprehensive AI regulation without stifling innovation? Newsom plans to collaborate with industry experts to craft more targeted, evidence-based regulations, showing a commitment to fostering both technological advancement and consumer protection. As one of the global nerve centers for AI, California's approach will likely influence standards far beyond its borders. How do you think AI regulations should evolve to effectively address both innovation and safety? Join the discussion below! #AI #Innovation #Regulation #TechPolicy #CaliforniaAI #FutureOfWork #EmergingTech
https://www.theverge.com/2024/9/29/24232172/california-ai-safety-bill-1047-vetoed-gavin-newsom
theverge.com
-
Governor Newsom's veto of SB 1047, California's landmark AI safety bill, signals a crucial moment in tech governance. While the bill aimed to set strict guardrails for AI development, its rejection highlights the complex balance between innovation and regulation. Key takeaways:
- The veto emphasises the need for nuanced, risk-based approaches to AI oversight
- It underscores concerns about stifling innovation in a rapidly evolving field
- The decision keeps California competitive in the global AI race but raises questions about future safety measures
This pause offers a chance to refine our approach to AI governance, but it also raises the question: how can we craft regulations that protect the public interest without hindering technological progress? Read more: https://lnkd.in/es9jvPwc #AIRegulation #TechPolicy #ResponsibleAI #CaliforniaTech
California governor vetoes major AI safety bill
theverge.com
-
Governor Newsom’s decision to veto #SB1047 is a smart move, considering how new and fast-changing the AI landscape is. AI is evolving too quickly for blanket regulations like this, which could end up holding back progress more than helping. Instead of locking down the models themselves, it makes more sense to focus on building safety measures at the application layer and in the AI workflows. This approach is more flexible, allowing for innovation at the "kernel of AI" layer while still addressing public safety concerns. #SB1047 #AISafety #ResponsibleAI https://lnkd.in/gkSvJWi7
-
Governor Newsom vetoed California’s AI safety bill, SB 1047, stating, ‘I do not believe this is the best approach to protecting the public from real threats posed by the technology.’ Instead, he’s partnering with experts like Dr. Fei-Fei Li of the Stanford Institute for Human-Centered Artificial Intelligence (HAI) to advance safe, responsible AI. This raises a critical question: How do we balance AI innovation with safety? Should a national framework lead, or should states like California drive the regulation? Learn more about the veto: https://lnkd.in/gsw9Fp9u and the initiatives: https://lnkd.in/g6mP463Y Cc: The GenAI Collective
-
Latest on #AIRegulation: Governor Gavin Newsom had a deadline of September 30, 2024, to either sign or veto SB 1047, the controversial AI safety bill recently passed by the California State Assembly. The bill aimed to impose strict safety and transparency requirements on AI models, particularly those used by large corporations developing powerful AI systems.

However, Governor Newsom has #vetoed the bill, expressing concerns about its narrow scope. In his statement, he explained that the legislation disproportionately targets the biggest and most expensive AI models while leaving smaller, yet potentially harmful, AI systems unregulated. Newsom emphasized the need for a more comprehensive approach to AI regulation that addresses the risks posed by AI technology across the board, not just from the largest companies. He also called for further collaboration between state and federal governments to create a balanced and effective regulatory framework for AI development and deployment.

This decision has sparked mixed reactions. Advocates for AI regulation argue that SB 1047 would have been a crucial first step toward reining in unchecked AI development. Critics, on the other hand, believe the bill’s selective focus would have stifled innovation and growth in California's tech industry, particularly for startups and smaller AI developers. With the veto, the future of AI regulation in California remains uncertain, and the debate over how best to govern AI's rapid advancements continues.

Continue reading here [link 🔗 below]. ⬇️
-
A sensible decision overall by Governor Newsom in vetoing SB 1047. The bill placed vague and overly broad regulatory burdens exclusively on large foundation models without properly considering their specific applications or deployment environments. #ai #ml #airegulation #technology #sb1047 https://lnkd.in/gPH2HUv5
-
It was a big week for #AISafety and #AIRegulation as Governor Gavin Newsom vetoed the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, aka #SB1047. My quick take...

Problems with the Bill:
- Might have actually prevented innovation, learning, and progress toward #SafeAI.
- Over-emphasized future hypothetical threats while ignoring current AI issues.
- Somewhat arbitrary focus on just the largest frontier models while ignoring dangers in smaller, specialized models.
- Would have placed a special burden on AI companies in California only, threatening the state's competitiveness.
- Downstream liability for AI producers might have been fatal for open-source AI.

What’s Next:
- The conversation should help spur federal regulation on AI safety – IF Washington can keep pace in understanding current technology.
- Expect the international AI safety conversation to gain momentum.
- In California, AI pioneer and superstar Dr. Fei-Fei Li and others will lead the state's future efforts to develop AI guardrails.

TL;DR: We are just at the top of the first inning on AI regulation, so stay tuned! https://lnkd.in/dE9chsWi
-
S.B. 1047, a bill proposed in California to regulate the largest AI frontier models, just got vetoed. Some thoughts:

I don't think this bill would have hindered AI innovation; quite the contrary. Its scope is models that cost more than $100M to train, and with that kind of money you can spend some level of effort on safety testing and documentation, even if your model is open-source.

I do agree that because frontier models are half-products, you can't necessarily hold providers liable for others who build systems with them, whether with ill intent or without the right best practices. In that sense the liability could have been more limited. And in principle, since general-purpose AI such as foundation models can be used for good or bad, the focus of AI legislation should be on the specific AI systems and applications built with these building blocks, as in the EU AI Act, for instance. The unacceptable- and high-risk categories in the EU AI Act apply at the level of purpose-specific AI systems; large foundation models fall more into the limited-risk category, where there are primarily transparency requirements.

If anything, this proposal is preparing the US market for the idea that there could be some level of AI regulation at some point as the AI market matures, at the state or federal level. #ai #airegulation #aiethics #responsibleai #trustworthyai #SB1047
Gavin Newsom vetoes sweeping AI safety bill, siding with Silicon Valley
politico.com
-
California’s AI Safety Bill Vetoed: Why This Matters

Governor Gavin Newsom recently vetoed SB 1047, a bill that would have set some of the most stringent AI safety regulations in the US. The bill aimed to protect against potential harms caused by AI, such as misuse in critical infrastructure, but it also raised concerns that it would stifle innovation by imposing high barriers, even for basic AI models.

Personally, I support this decision. Like many in the field, I believe the focus should be on regulating applications of AI, not restricting the technology itself. What we should aim for is a set of flexible, science-driven guardrails that evolve as AI does, allowing for innovation while mitigating risks. Newsom’s move to work with experts like Fei-Fei Li and tech companies on developing responsible frameworks is a positive step. Rather than enacting broad and potentially limiting legislation, we should focus on a regulatory approach based on real-world applications and evidence.
California Governor Vetoes Bill to Create First-in-Nation AI Safety Measures
securityweek.com
-
In a move shaking up the tech world, California Governor Newsom has vetoed the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047). His reasoning? The bill might be too broad—like trying to use a sledgehammer to crack a nut. While the bill aimed to impose regulations to rein in AI, Newsom warned that it could create a false sense of security and inadvertently stifle the innovation that California is known for. He also pointed out that smaller, specialized models could carry risks similar to those of larger systems. Meanwhile, supporters of the bill see this veto as a setback for oversight and public safety, igniting a vital debate: How do we keep AI in check without putting the brakes on the progress we all value? Read more: https://lnkd.in/es9jvPwc