Murtuza (Matt) Haryanawalla's Post
Good news for #AI #innovation, and especially for developers, builders, and the technology community at large. Thank you Gavin Newsom and the Office of the Governor - California for a measured and thoughtful veto of the #SB1047 bill. While no one disputes the need for #regulation in #AI, #California in particular has both an #opportunity and a #responsibility to assume leadership. A deeper understanding of the technical ways in which AI #data is #retrieved, #processed, and #presented is key to #informed regulation. Unlike traditional #relational #databases, where the data you retrieve is the same as the data you store, with #tables and #indexed data lookups, AI and #GenerativeAI are fundamentally different. With #vector #databases and #neuralnetworks, #LLMs can create data on the fly. This calls for qualified input from #academic #research, #developers, and #technology #providers to inform policy on #guardrails and #practical #industry implementations. Join the vetted community at Llama Labs, or follow me for more on this and other full-stack AI technology conversations, deep dives, resources, and events.
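To make that contrast concrete, here is a minimal sketch (Python with numpy only; the documents, the placeholder embeddings, and the 384-dimension size are made-up illustrations, not any particular product's API) of how an exact keyed lookup in a relational store differs from the similarity-based retrieval that typically feeds a generative model:

```python
import numpy as np

# Relational-style lookup: what you retrieve is exactly what you stored.
records = {"order_1001": {"status": "shipped", "total": 42.50}}
exact_result = records["order_1001"]  # byte-for-byte the value that was inserted

# Vector-style retrieval: documents are stored as embeddings and matched by
# similarity, not by key. The embeddings below are random stand-ins for what a
# real embedding model would produce.
docs = ["refund policy", "shipping times", "warranty terms"]
doc_vectors = np.random.rand(len(docs), 384)   # placeholder document embeddings
query_vector = np.random.rand(384)             # placeholder query embedding

# Cosine similarity picks the "closest" document; nothing guarantees an exact
# match exists, only an approximate one.
sims = doc_vectors @ query_vector / (
    np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(query_vector)
)
best_doc = docs[int(np.argmax(sims))]

# An LLM would then generate an answer conditioned on best_doc, producing text
# that was never stored anywhere: the "data created on the fly" in the post.
print(exact_result, best_doc)
```

The point of the sketch is only the structural difference: the first pattern returns stored facts verbatim, while the second returns a nearest match that a model then elaborates on.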
More Relevant Posts
-
As a hot topic, you can imagine there are strong opinions on both sides of the spectrum. Supporters of the veto believe what was proposed went too far and would completely hamstring AI progress and innovation while grossly restricting access to the technology. Opponents of the veto believe this bill barely scratched the surface in limiting tech companies' excessive power and influence, and that the veto will result in inevitable harm to humanity from non-governed AI development. So, who's right? Neither. As with everything, both sides have legitimate points and concerns that should be taken into consideration. Both sides are attempting to validate their position by comparing their steel-man arguments to their opponents' straw-man arguments. Both sides are also approaching the situation as a zero-sum game, resulting in an unwillingness to compromise. For the sake of everyone, we can do better. Perhaps, instead of choosing a polar end of the spectrum and doubling down on a no-holds-barred competition for victory, both parties can work, with reasonable compromises, toward a mutually beneficial solution: one that puts reasonable governance and oversight on what we already know to be the most powerful technology in human history while continuing to innovate and democratize its capabilities and access for the benefit of humankind. #ai #governance #technology
California governor vetoes major AI safety bill
theverge.com
-
Latest on #AIRegulation: Governor Gavin Newsom had a deadline of September 30, 2024, to either sign or veto SB 1047, the controversial AI safety bill recently passed by the California State Assembly. The bill aimed to impose strict safety and transparency requirements on AI models, particularly those used by large corporations developing powerful AI systems. However, Governor Newsom has #vetoed the bill, expressing concerns about its narrow scope. In his statement, he explained that the legislation disproportionately targets the biggest and most expensive AI models while leaving smaller, yet potentially harmful, AI systems unregulated. Newsom emphasized the need for a more comprehensive approach to AI regulation that addresses the risks posed by AI technology across the board, not just from the largest companies. He also called for further collaboration between state and federal governments to create a balanced and effective regulatory framework for AI development and deployment. This decision has sparked mixed reactions. Advocates for AI regulation argue that SB 1047 would have been a crucial first step toward reining in unchecked AI development. Critics, on the other hand, believe the bill's selective focus would have stifled innovation and growth in California's tech industry, particularly for startups and smaller AI developers. With the veto, the future of AI regulation in California remains uncertain, and the debate over how to best govern AI's rapid advancements continues. Continue reading here [link 🔗 below]. ⬇️
California governor vetoes major AI safety bill
theverge.com
-
California Governor Gavin Newsom vetoed a significant AI safety bill (SB 1047), citing concerns over its potential to stifle innovation and burden companies without effectively addressing AI risks.
🛑 Governor Newsom's Veto: He argued the bill imposes stringent standards even on basic AI systems, risking unnecessary constraints on innovation.
⚠️ Public Security Concerns: Newsom believes the bill might give a false sense of security, as it doesn't consider AI's deployment in high-risk environments or the complexity of smaller models.
💼 Corporate Pushback: AI companies, including Meta and OpenAI, opposed the bill, warning it could hamper technological progress, with some calling for federal regulation instead.
🎭 Hollywood's Support: Celebrities like Mark Hamill and unions such as SAG-AFTRA supported the bill, seeing it as crucial for public safety and oversight of powerful AI corporations.
🏛️ Federal AI Regulation: The federal government is also exploring ways to regulate AI, signaling broader concern about its impact on society and national security.
#AIRegulation #Innovation #TechPolicy
🧠 Governor Newsom's decision aligns with his view that more nuanced and targeted regulations are needed to address AI risks effectively.
📉 Critics worry this veto leaves AI companies largely unregulated, raising concerns about unchecked power and potential harm from advancing technologies.
🇺🇸 Federal policymakers are stepping up discussions, with the Senate proposing a roadmap to address AI's influence on security, elections, and more.
♻️ Repost if you enjoyed this post, and follow me, César Beltrán Miralles, for more curated content about generative AI! https://lnkd.in/g2MyXvTK
California governor vetoes major AI safety bill
theverge.com
-
Governor Newsom's veto of SB 1047, California's landmark AI safety bill, signals a crucial moment in tech governance. While the bill aimed to set strict guardrails for AI development, its rejection highlights the complex balance between innovation and regulation.
Key takeaways:
- The veto emphasises the need for nuanced, risk-based approaches to AI oversight
- It underscores concerns about stifling innovation in a rapidly evolving field
- The decision keeps California competitive in the global AI race but raises questions about future safety measures
This pause offers a chance to refine our approach to AI governance, but it does beg the question: how can we craft regulations that protect public interest without hindering technological progress?
Read more: https://lnkd.in/es9jvPwc
#AIRegulation #TechPolicy #ResponsibleAI #CaliforniaTech
California governor vetoes major AI safety bill
theverge.com
-
California Governor Gavin Newsom has vetoed a landmark AI safety bill, citing concerns it could stifle innovation and drive tech companies out of the state. 🚫💻 Instead, he plans to collaborate with experts to develop more effective, science-based regulations for AI technology. 🤝✨ Read full article from TheVerge! 👇 #AISafety #GavinNewsom #TechRegulation #ArtificialIntelligence #AIEthics #TechIndustry #PolicyChange
California governor vetoes major AI safety bill
theverge.com
-
In a move shaking up the tech world, California Governor Newsom has vetoed the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047). His reasoning? The bill might be too blunt an instrument, like trying to use a sledgehammer to crack a nut. While the bill aimed to impose regulations to rein in AI, Newsom warned that it could create a false sense of security and inadvertently stifle the innovation that California is known for. He also pointed out that smaller, specialized models could carry risks similar to those of larger systems. Meanwhile, supporters of the bill see this veto as a setback for oversight and public safety, igniting a vital debate: How do we keep AI in check without putting the brakes on the progress we all value? Read more: https://lnkd.in/es9jvPwc
California governor vetoes major AI safety bill
theverge.com
-
S.B. 1047, a bill proposed in California to regulate the largest AI frontier models, just got vetoed. Some thoughts: I don't think this bill would have hindered AI innovation, quite the contrary. The scope is models that cost more than $100M to train; with that kind of money you can spend some level of effort on safety testing and documentation, even if your model is open-source, at least to some degree. I do agree that, as frontier models are half-products, you can't necessarily hold providers liable for others who build systems with them, either with ill intent or without the right best practices. In that sense the liability could have been more limited, and in principle, as general-purpose AI such as foundation models can be used for good or bad, the focus of AI legislation should be on specific AI systems and applications built with these building blocks, just as in the EU AI Act for instance. The unacceptable- or high-risk systems in the EU AI Act are defined at the level of purpose-specific AI systems; large foundation models fall more into the limited-risk category, where there are primarily transparency requirements. If anything, this proposal prepares the US market for the idea that there could be some level of AI regulation at some point as the AI market matures, at state or federal level. #ai #airegulation #aiethics #responsibleai #trustworthyai #SB1047
Gavin Newsom vetoes sweeping AI safety bill, siding with Silicon Valley
politico.com
-
📢 Both houses of the California legislature passed the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047), also known as the AI Safety Bill, to regulate and ensure the safe use of AI technology. What does this mean for the industry? The introduction of SB 1047 aims to align human values and rights with technological advancement. Although the law is not yet effective, it can significantly impact federal policy, particularly for #legaltech companies housed in California. 📝
Key provisions of SB 1047:
🔹Pre-launch testing: Developers of advanced AI systems must conduct safety tests to detect any harmful capabilities before deployment.
🔹Prevention of misuse: Cybersecurity protections and continuous monitoring apply post-launch to prevent misuse such as cyberattacks or the development of dangerous weapons.
🔹Employee protection: Employees of AI labs are protected when they report safety violations and unethical practices.
🔹A fairer system: CalCompute, a public cloud computing platform, will assist startups and researchers in developing AI responsibly, ensuring that innovation isn't limited to large companies.
Read the full article here 👉https://lnkd.in/gs6AJGTu
More than ever, this is the time for legaltech professionals to equip themselves with the knowledge and tools to navigate through these unprecedented times. ▶️ That is why #LIC is here to help! Our event features panel discussions from experts who are building resilience for the industry and providing critical insights to propel your business forward. Pre-register for the 2025 edition of Legal Innovators California here: 📌 https://lnkd.in/ge5UjWH9
#LegalInnovatorsCalifornia #LegalTech #Innovation #Networking #LegalProfession
A Look at California’s Sweeping AI Safety Bill | TechPolicy.Press
techpolicy.press
-
AI (artificial intelligence) has been making waves, and local governments are grappling with how to navigate its complexities. 🤖💡 🏙️ The city of Grove City, Ohio, is leading the way by creating a model for other local governments. Here's what we can learn from their approach:
1. Proactive Governance: Grove City's strategy is grounded in proactive governance and risk mitigation. Before diving into AI policy, they revamped their overall policy management practices. Ensuring that employees understand acceptable conduct and performance from day one is crucial.
2. Policy Foundation: Grove City adopted a policy management system used by their police department. They transferred existing policies into the cloud and established new ones. This move enhanced professionalism and compliance across the board.
3. Nuanced AI Policy: Rather than outright banning AI, Grove City is creating a nuanced AI policy. They recognize that AI search can expose sensitive information, but they're focused on safe and appropriate use. Their model sets the groundwork for other local governments.
4. Real-Time Communication: When policies evolve due to emerging technologies, Grove City communicates with employees in real time. This ensures everyone understands the implications of new policies.
#CGIAdvantage has these solutions already; reach out to us. #AI #LocalGovernment #Policy #TechLeadership #CGI
https://www.americancityandcounty.com
-
California’s AI Safety Bill Vetoed: Why This Matters Governor Gavin Newsom recently vetoed SB 1047, a bill that would have set some of the most stringent AI safety regulations in the US. The bill aimed to protect against potential harms caused by AI, such as misuse in critical infrastructures, but it also raised concerns that it would stifle innovation by imposing high barriers, even for basic AI models. Personally, I support this decision. Like many in the field, I believe the focus should be on regulating applications of AI, not restricting the technology itself. What we should aim for is a set of flexible, science-driven guardrails that evolve as AI does, allowing for innovation while mitigating risks. Newsom’s move to work with experts like Fei-Fei Li and tech companies on developing responsible frameworks is a positive step. Rather than enacting broad and potentially limiting legislation, we should focus on a regulatory approach based on real-world applications and evidence.
California Governor Vetoes Bill to Create First-in-Nation AI Safety Measures
securityweek.com