We are 29 minutes from midnight! Lots of people are talking about the risks of AI, but Michael Wade and his team at IMD have built a data-driven AI Safety Clock to measure how close we are to Uncontrolled Artificial General Intelligence (UAGI). Would love to know what you think. Leave comments below. #imdimpact
Thanks for sharing the article, David (and Michael). I’m curious whether elements discussed in the book The Anxious Generation (by Jonathan Haidt) are also covered by the AI Safety Clock. Although ‘the’ social media algorithms might not fall 100% within the definition of AI or UAGI, their effects may already be causing serious harm, at least in the long(er) run. The book opens with a stunning story posing the question of whether you would allow your daughter to join a trip to Mars — without any research into the safety effects on the participating children… 🤔
Thank you. This is a great initiative and very much appreciated. It will be interesting to follow, and I hope the clock will move backward! What I would further appreciate are indicators of where AI is helping and enhancing our world (the indicators would need to be defined), and of where its influence is having a significant impact on jobs (I believe one of the greatest fears). I am working on AI solutions and on bringing them to market, and I believe in the general good of humankind and in the many positive aspects of the technology. At the same time, I remain very critical, as it does not take much, or many people, to abuse technology.
Very bright idea to bring up such an ‘assessment’ so we have some kind of evaluation of where we stand. As an individual I oscillate between fascination with and concern about AI, so thank you for zooming out and putting it in perspective.
This is not just interesting but extremely timely, given the very recent news that legislation providing oversight of AI has been rejected in California (surprise...). One can only applaud such initiatives by IMD; it is of paramount importance to be informed and to do our best to stay ahead of the game (if possible).
Very interesting.
Very interesting post, David Bach. While uncontrolled AI presents challenges, assuming an apocalyptic outcome at this stage underestimates human resilience and our history of governance. Access to AI, even if democratized, does not mean everyone will suddenly possess the capability to create large-scale destruction akin to nuclear threats. AI tools are fundamentally different: they involve skills and resources that are not easily misused for catastrophic purposes. Fake news didn't have much impact in the recent UK elections, either. The nuclear arms race, for example, led to tight international control through treaties like the Non-Proliferation Treaty, demonstrating humanity's capacity for proactive regulation. We are getting there.