Newsom's AI Veto: California's Governor Hits Pause on 'Frontier' Tech Regulation

Governor Gavin Newsom has vetoed Senate Bill 1047, which aimed to regulate large-scale AI models in California. Here's what you need to know:

1. The bill would have required developers of large AI models to implement safeguards against potential catastrophic harm.
2. It proposed establishing a state Board of Frontier Models to oversee AI development.
3. Newsom acknowledges the need for AI regulation but believes SB 1047 is not the right approach.

Key reasons for the veto:
- The bill focuses solely on large, expensive models, potentially overlooking risks from smaller, specialized AI systems.
- It doesn't consider whether an AI system is deployed in high-risk environments or involves critical decision-making.
- The governor believes the regulatory framework should be more adaptable to keep pace with rapidly evolving AI technology.

Newsom's stance:
- Affirms California's role in regulating AI, including potential national security implications.
- Supports a California-specific approach if necessary, but emphasizes the need for empirical evidence and scientific backing.
- Highlights ongoing efforts, including risk analyses of AI threats to critical infrastructure.

Moving forward:
- The governor commits to working with the Legislature, federal partners, tech experts, ethicists, and academia to develop appropriate AI regulations.
- He stresses the importance of balancing protection against real threats with fostering innovation for the public good.

Newsom's message: "We cannot afford to wait for a major catastrophe to occur before taking action to protect the public. California will not abandon its responsibility."

What's next? Expect continued discussions and potential new legislative proposals aimed at creating a more flexible and comprehensive AI regulatory framework in California.

#AI #ArtificialIntelligence #AIregulation #SztucznaInteligencja #SI #law #California
Robert Nogacki’s Post
More Relevant Posts
-
Congress needs to continue a step-by-step approach to artificial intelligence, focusing on policy gaps and incentives, addressing actual harms, and pre-empting state regulatory moves, according to Paul Lekas of the Software & Information Industry Association.

“I applaud Congress for getting up to speed [on AI] over the last two years; they are much better equipped than they get credit for,” Lekas told Inside AI Policy last week. “But Congress can’t do everything and shouldn’t do everything. They’re figuring out where they should step in.” Lawmakers should “look at gaps in the law, create incentives and disincentives, and do all they can to advance innovation,” according to Lekas.

READ FULL STORY: #AIregulation
-
California's Governor has vetoed SB 1047 - the bill that would have regulated the development of AI within the state.

The bill, which was authored by State Senator Scott Wiener, would have made companies that develop AI models liable for implementing safety protocols to prevent critical harms. The rules would only have applied to models that cost at least $100 million to train and use large amounts of computation during training.

The reason stated for vetoing the bill was that it "would have stifled AI innovation, putting California's place as the global hub of innovation at tremendous risk": https://lnkd.in/gUJsA2Fk

This is a really disappointing outcome. The bill was widely opposed in Silicon Valley, and it's discouraging to see that influence carry over to regulatory decisions that are supposed to be in the best interest of the public.

You can read more about the bill and its veto here: https://lnkd.in/gMKnzKPG

#AIRegulation #CaliforniaPolitics #SB1047 #Innovation #PublicSafety #TechPolicy #AIDevelopment #SiliconValley #Governance #EthicalAI #RegulatoryImpact #TechIndustry #GovernorVeto #PublicInterest #FutureOfAI
Gov. Newsom vetoes California’s controversial AI bill, SB 1047
yahoo.com
-
Newsom [finally] vetoes SB 1047

As things were seemingly trending the last few weeks, California Gov. Gavin Newsom vetoed SB 1047 (aimed at imposing safety vetting requirements on large AI models), siding with Silicon Valley and the majority of the CA congressional delegation.

While this veto will undoubtedly grab headlines in the coming weeks, it's unlikely to shift the substantive conversation around AI regulation in DC, as federal AI discussions are already evolving, driven by broader geopolitical and economic concerns and seemingly unconcerned with the pace of state law. Still, the veto is significant in the context of other states: it could influence how state legislatures approach AI regulation, particularly as momentum builds around the country to fill the gap left by federal inaction.

The bill, by state Sen. Wiener, sought to require developers of certain AI systems to certify safety testing before deployment, addressing risks like bioweapons creation. Newsom, however, argued that the legislation's blanket application of strict standards to all AI systems—regardless of their risk level or context—was too broad, and that such an approach would unnecessarily burden even basic AI functions, ultimately stifling innovation without effectively protecting the public from actual AI threats.

The veto followed intense debate among major players in CA's tech industry, including opposition from Google, OpenAI, and various VC firms. Prominent figures like Elon Musk and several AI researchers supported the bill, seeing it as a way to mitigate public risk, but many in Silicon Valley—including political leaders like former Speaker Pelosi and Reps. Lofgren and Khanna—feared it would hamper the state's economic competitiveness.

Newsom's decision also underscores CA's complex balancing act between maintaining its leadership in #tech #innovation and fostering responsible #regulation. While he vetoed the broader #AI bill, he signed a more modest state emergency response bill to study AI risks and committed to working with experts like Stanford's Dr. Fei-Fei Li to develop more targeted #legislation in 2025. Newsom has also pledged to collaborate with #labor and #business to expand #workplace AI applications.

As other states increasingly tackle AI regulation, the broader implications of this #veto could shape federal legislation in the future, as momentum builds for a national framework to address AI risks and opportunities across various sectors.

Monument Advocacy
-
Yesterday Colorado's legislature passed a first-of-its-kind artificial intelligence bill, the Colorado Artificial Intelligence Act (SB 205)!

I think CO's approach is very practical and finds a middle ground between soft-touch guidance (e.g., #CaliforniaAI) and over-written, over-broad regulations that would be impossible to enforce. Instead, CO's bill focuses on a specific high-risk consumer issue, algorithmic discrimination, and lays out specific requirements and enforcement procedures for an AI system's development and use. CO is also considering another bill (HB 1468) that would create a multi-disciplinary working group responsible for evaluating the law's effectiveness.

While it's not perfect, it is promising to see how agile and open-minded legislators were when drafting and reviewing the bill. SB 205 began with a much wider scope, but was narrowed to address a specific issue, rather than broad harms. It's promising to see states beginning to draft *and pass* incremental AI regulation rather than trying to boil the ocean with drafts that won't make it out of committee.

#COAI #artificialintelligence #algorithmicdiscrimination #airegulation

https://lnkd.in/e97Hg3aD
Colorado House Approves Bill Restricting Private Sector AI (1)
news.bloomberglaw.com
-
If you are interested in AI, you want to keep your finger on the pulse of all things coming out of the Federal Government (states too) regarding regulation. It's a fine balance between protection and innovation, and those of us in this space need to advocate for innovation while understanding the requirements for regulatory protections. If you are in government contract work, you should understand the new innovation funding streams coming out to further this work. It is going to be an exciting ride.
US Lawmakers Seek $32 Billion to Keep American AI Ahead of China
usnews.com
-
🌟 Exciting News! 🌟

📢 Colorado becomes the first state in the country to create a regulatory framework for artificial intelligence! 🎉 [3]

🤝 Gov. Jared Polis has just signed Senate Bill 24-205 into law, marking a significant milestone in the world of AI. This groundbreaking legislation establishes new individual rights and protections around high-risk artificial intelligence systems. 🚀

🌐 As the Director of Innovation, I am thrilled to see Colorado taking the lead in shaping the future of AI regulation. This move sets a precedent for other states and countries to follow, ensuring that AI technologies are developed and deployed responsibly. 💡

🌍 Developing a business strategy is crucial in today's world, and AI plays a pivotal role in shaping the future of businesses. With the introduction of comprehensive AI regulations, companies can now navigate the AI landscape with confidence, knowing that ethical considerations and individual rights are at the forefront. 🌱

📚 Just recently, Alberta announced that it is also working on its first regulations governing artificial intelligence. [2] This demonstrates the growing recognition of the need for AI regulations worldwide. It's an exciting time for the AI industry as we witness the convergence of innovation and responsible governance. 🌟

🌄 As we celebrate this milestone, let's reflect on the immense potential of AI to transform industries and improve lives. By embracing AI regulations, we can harness the power of this technology while ensuring it is used ethically and responsibly. Together, we can shape a future where AI benefits all of humanity. 🌈

🗣️ I would love to hear your thoughts on this groundbreaking development! How do you think AI regulations will impact businesses and society as a whole? Share your insights in the comments below! Let's start a conversation and drive positive change together. 💬

#AIRegulations #EthicalAI #ResponsibleInnovation #FutureOfAI #BusinessStrategy

References:
[1] Developing a business strategy: https://lnkd.in/dPqrxUed
[2] Alberta working on its first regulations governing artificial intelligence: https://lnkd.in/d8u3W9Hc
[3] Colorado Enacts First Comprehensive U.S. Law Governing Artificial Intelligence Systems: https://lnkd.in/d38ERk5v
Colorado becomes first state with sweeping artificial intelligence regulations • Colorado Newsline
https://coloradonewsline.com
-
"SB-1047: Governor Gavin Newsom’s Ambivalent Approach To AI Regulations" California's status as a global economic leader is bolstered by Silicon Valley's tech innovation, with influential companies like Apple, Alphabet, and Meta calling it home. The state is now at a critical juncture in the realm of artificial intelligence (AI) with the introduction of Senate Bill 1047 (SB-1047), which aims to impose stringent regulations on AI safety. The bill, passed in August 2024, initially required rigorous safety tests, third-party evaluations, and severe penalties for non-compliance, but has since been revised to be more flexible. It now focuses on public safety statements rather than a dedicated regulatory agency and exempts smaller entities from some requirements. Governor Gavin Newsom faces a challenging decision on whether to sign SB-1047 into law, balancing pressure from advocacy groups and Democratic supporters with significant opposition from Silicon Valley tech executives. The bill's potential impact on AI regulation represents a significant shift, and its finalization by September 30, 2024, could set a precedent for managing AI innovation and safety. The outcome will reveal whether California can harmonize the demands of innovation with regulatory oversight. Read the full article by Amartya Mody: https://lnkd.in/e8d3JWwX #LegalArticles #Articles #CaliforniaAI #SB1047 #TechRegulation #SiliconValley #AIInnovation #GovernorNewsom #AIRegulation #TechPolicy #ArtificialIntelligence #Legislation2024 #FutureOfAI #NaikNaikAndCo #NNICO
SB-1047: Governor Gavin Newsom’s Ambivalent Approach To AI Regulations
https://naiknaik.com
-
🏛️ No, this isn't a civics lesson, but... You might just have missed a big story coming out of Colorado on where US AI regulation policy is headed next.

🚀 Colorado's AI Act is far from just any piece of legislation. It's a complete tone shift in how states like CO are tackling thorny AI challenges. Here's what you need to know:

• High-Risk AI Systems: The Act specifically targets systems that make consequential decisions affecting areas like education, employment, financial services, healthcare, housing, and legal services. This means any AI system significantly impacting these areas must now adhere to strict guidelines to prevent unfair treatment based on protected classifications such as age, race, and gender.

• Developer and Deployer Obligations: The Act mandates that developers and deployers of high-risk AI systems maintain detailed documentation, publicly disclose how they manage risks, and implement comprehensive risk management policies. This ensures transparency and accountability throughout the AI lifecycle.

• Consumer Rights: The language of the Act guarantees transparency, the right to correct data, and the right to appeal AI-driven decisions. This will greatly empower consumers to have more control and clarity over how AI impacts their lives.

• Enforcement: The Colorado Attorney General now has exclusive authority to enforce this law, ensuring rigorous compliance.

Signed into law last Friday, this bill is one of the most robust public responses to the growing influence of AI in our daily lives and in critical sectors.

🗳️ Having navigated the legislative maze before, I can appreciate the sheer effort it takes to pass a groundbreaking bill.

Read the summary: https://lnkd.in/grrWvx8p

🤔 Where do you think AI governance is headed next in the US?

–Noah

#PublicInnovation #SB205 #AIGovernance #AISafety

–––

🌐 Welcome to Aurix, a community of next-gen leaders transforming their lives and working with technology.
📢 Visit our website and join our community: https://www.aurixai.org/
♻️ Share if you found this interesting!
Two unlikely states are leading the charge on regulating AI
politico.com
-
U.S. Struggles for Unified AI Regulation Amid State-Level Advances and Industry Pushback

U.S. lawmakers are wrestling with the challenge of regulating AI, with progress marked by both breakthroughs and setbacks. While states like Tennessee, Colorado, and California have advanced state-level regulations—ranging from voice cloning protections to risk-based approaches—comprehensive federal policy remains elusive. California’s SB 1047, a proposed AI safety bill, was vetoed after industry pushback, yet federal bodies like the FTC and FCC are increasingly applying existing regulations to rein in unchecked AI practices. President Biden’s AI Executive Order also established the U.S. AI Safety Institute (AISI) to assess AI risks, though the institute’s permanence depends on legislative support.

Experts, including UC Berkeley’s Jessica Newman, suggest that without federal oversight, a confusing patchwork of state rules could emerge, complicating compliance for AI firms nationwide. With nearly 700 AI-related bills introduced at the state level this year alone, the push for unified federal regulation is growing, even as high-profile tech figures resist. California Senator Scott Wiener, a strong advocate for AI regulation, believes that recent efforts are building momentum, setting the stage for future legislative successes to address AI’s complex risks.

For more news and updates, visit: //www.byteadvisory.com/blog/ or subscribe to our newsletter.

#2024Election #SocialMediaRegulation #ContentModeration #ElectionDisinfo #USPolitics #ByteAdvisory