Framing AI the way we frame nuclear technology (all these references to a Manhattan-style project and so on) implicitly creates the impression of something that can be contained and handled by a small group of bright minds, who will successfully use it for the good of humanity and shield us from any harm. However, IMO, just one illustration of how easily AI produces complexity beyond the reach of even top-notch specialists is enough to break this utopian picture: 'With the 37th move in the match's second game, AlphaGo landed a surprise on the right-hand side of the 19-by-19 board that flummoxed even the world's best Go players, including Lee Sedol. "That's a very strange move," said one commentator, himself a nine dan Go player, the highest rank there is. "I thought it was a mistake," said the other.' The distribution of possible outcomes for AI models is so fat-tailed that the safest way to apply this technology looks to be the open-source route: tens of millions of engineers working with tens of thousands of different models, tuned from tens of different foundation models. Not tens of thousands of specialists in selected laboratories, but tens of millions of engineers worldwide, working deep inside the models on a daily basis - IMO, that's the safer way to handle a technology with such a fat-tailed distribution of outcomes. Your thoughts?
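To make the fat-tailed intuition concrete, here is a minimal sketch. It assumes, purely for illustration, a Gaussian as the thin-tailed baseline and a Student's t distribution with 2 degrees of freedom as the heavy-tailed alternative; the threshold and sample size are arbitrary choices, not anything from the post itself.

```python
# Illustrative sketch: how often do "impossible" extreme events occur
# under a thin-tailed model versus a heavy-tailed one?
import numpy as np

rng = np.random.default_rng(seed=0)
n = 1_000_000

# Thin-tailed baseline: standard normal "surprise" scores.
gaussian = rng.standard_normal(n)

# Heavy-tailed alternative: Student's t with 2 degrees of freedom.
heavy = rng.standard_t(df=2, size=n)

threshold = 6.0  # a "6-sigma" surprise, something like a move 37
p_gauss = np.mean(np.abs(gaussian) > threshold)
p_heavy = np.mean(np.abs(heavy) > threshold)

print(f"P(|X| > {threshold}) under Gaussian:   {p_gauss:.2e}")
print(f"P(|X| > {threshold}) under heavy tail: {p_heavy:.2e}")
# Under the Gaussian, a 6-sigma event has theoretical probability ~2e-9,
# so a million draws typically contain none; under the t(2) model the
# two-sided probability is on the order of 1e-2, so thousands show up.
```

Under the thin-tailed model the extreme event essentially never happens in a million draws; under the heavy-tailed model it happens thousands of times. That asymmetry is the crux of the argument: planning around the average behaviour of such a system badly underestimates its surprises.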
Alexey Mitin’s Post
More Relevant Posts
-
It is fascinating to watch new developments in AI!
AI Shocks Again: DeepMind V2A, AI BRAIN, OpenAI Nuclear AI, GPT-5 & More (June Monthly News)
https://www.youtube.com/
-
A few days back, I watched an old movie, WarGames (1983). It was about an AI computer with the nuclear arsenal under its command, so that when launch orders were given, it would not hesitate the way humans do. The problem started when the AI computer behaved differently from what was expected of it. The movie was accurate in describing how the AI computer worked, and it reminded me of AlphaGo (2016) and AlphaZero. It not only anticipated the technology but also addressed the core issue in dealing with AI: "Our understanding of AI is different from AI's understanding of itself". Humans took millions of years to evolve the current level of intelligence, whereas AI took only decades to reach a level where it can answer most of our questions. The question is when it will move from 'narrow' intelligence to 'general' intelligence.
-
🚨The conversation around AI and its potential risks is heating up. Many top AI scientists and entrepreneurs are sounding the alarm, urging that “the risk of extinction from AI” be treated as a global priority – on par with pandemics and nuclear war. But here’s the thing: even if AI never escalates to an extinction-level threat, it still has the potential to cause massive disruption through misuse, accidents, or conflict. And yet there is no consensus across the field on the true nature, scale, or likelihood of these risks. So, what’s fueling this concern? What are the risks today that could lead to even greater risks tomorrow? If you’re curious to dive deeper into the reasons behind these concerns and want to stay informed, here are some resources that have really shaped my understanding:
1. “Statement on AI Risk” - https://lnkd.in/eA4YVf4G
2. “Future Risks of Frontier AI” by the UK Government Office for Science - https://lnkd.in/eWJ8acjc
3. “An Overview of Catastrophic AI Risks” - https://lnkd.in/eHc_Z8Dt
4. “AI Could Defeat All of Us Combined” by Holden Karnofsky - https://lnkd.in/ex3dDiup
5. “Rogue AI” by the Center for AI Safety - https://lnkd.in/eCUuwSrD
6. “AGI Safety From First Principles” by Richard Ngo - https://lnkd.in/eEiF-TUk
7. A YouTube video by Robert Miles, “Why Would AI Want to do Bad Things?” - https://lnkd.in/eXV7TtVM
8. “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” - https://lnkd.in/eFJf6vJ7
💭 What do you think – are we overestimating the risks of AI, or are we not taking the potential dangers seriously enough? #AISafety #AIGovernance #AIRiskManagement
-
Recently the tech community (myself included) celebrated the Nobel Prizes awarded to Geoffrey Hinton (Physics) and Demis Hassabis (Chemistry). The fact that both of these awards recognised the potential and beneficial impact of #AI was a statement of confidence in this still-emerging technology. However, something I also appreciate that I have not yet seen called out is that the winners are role models for the safe and responsible deployment of AI, not just for its astonishing progress. Geoffrey and Demis are both leaders in this field, and both were among the founding signatories of the Center for AI Safety’s statement on AI risk last year. That statement read: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war”. https://lnkd.in/gndcx_5P Of course, it is important to have role models and experts who can see and realise the potential of AI, but it’s crucial that they also keep an eye on shaping this technology in a sustainable way. Listen to Demis speak on this topic and what he hopes for with AI’s progress: https://lnkd.in/gPN6x_vZ
Unreasonably Effective AI with Demis Hassabis
https://www.youtube.com/
-
Is AI More Dangerous Than Nuclear Weapons? 🤖💥 In the rapidly evolving landscape of technology, the potential risks and benefits of Artificial Intelligence (AI) have become a hot topic of debate, but the benefits we can reap from the technology are immense. At the frontier of that technology, Generative AI is transforming the landscape. From generating creative content to solving complex problems, it's a game-changer! I am glad I had an incredible session at GrowJunction, delving deep into the possibilities of AI in Product Management with mentor Kunal Parekh! 🚀 Huge thanks to Kunal Parekh for the invaluable insights and to GrowJunction for hosting this enlightening session! 🌟 #ProductManagement #SkillsDevelopment #GrowJunction #ProfessionalGrowth
-
Latest updates on all of AI - sit back, relax, and watch a 2-hour movie 🍿 🎥 https://lnkd.in/gBU8gwKC
AI Shocks Again: DeepMind V2A, AI BRAIN, OpenAI Nuclear AI, GPT-5 & More (June Monthly News)
https://www.youtube.com/
-
Recent reads. Life 3.0 is my favourite. It describes how AI could take over the world: from computers making movies with realistic sets, deep-fake actors, and top-notch storylines, all the way up to where things go crazy, when AI discovers new elements on Earth that can make a nuclear bomb the size of a capsule. Whoever controls AI in the future will be the kingmaker: they will own the news channels, and so end up owning people's minds, thereby deciding which political party wins the election. Best work by Max Tegmark. #AI #OpenAI #ArtificialIntelligence #Books
-
Recent wargames using artificial-intelligence models from OpenAI, Meta and Anthropic revealed a troubling trend: AI models are more likely than humans to escalate conflicts to kinetic, even nuclear, war. This outcome highlights a fundamental difference in the nature of war between humans and AI. For humans, war is a means of imposing will for survival; for AI, the calculus of risk and reward is entirely different because, as the pioneering scientist Geoffrey Hinton noted, ‘we’re biological systems, and these are digital systems.’ https://lnkd.in/dX_dwSZ7
-
🚨 "The Coming Wave" by Mustafa Suleyman – a call to action on AI regulation. 🤖 🎧 I recently finished listening to the Audible version of “The Coming Wave” by Mustafa Suleyman, co-founder of both Deep Mind (acquired by Google) and Inflection AI (acquired by Microsoft, sort of). He is worried that we won’t be able to “contain” AI, given the historic track record of new technologies spreading, with the notable (partial) exception of nuclear weapons. He paints an ugly scenario: a rogue biologist creating a novel pathogen with cheap DNA synthesizers and ubiquitous AI tools, releasing it and killing billions. 🧬 Suleyman argues that while containment, regulation essentially, is impossible it must be possible because the stakes are too high. He says we need to get going on this quickly to reduce risks including: engineered pathogens, cyber-attacks, and misinformation campaigns. 🛡️ 🖥️ 📰 📚 “The Coming Wave” is a worthwhile read (or listen) for anyone interested in the future of AI and its risks. Link to Audible version: https://lnkd.in/g6m9P4PQ #AI #TheComingWave #AIRegulation #FutureOfAI
The Coming Wave
audible.com