Drama in AI history

For the first time in AI history, there is more drama from the humans who develop AI than from the AI these humans develop.

It started with #Geoffrey Hinton, the "Godfather of AI", when he announced his departure from Google over the harm the latest #AI was likely to cause to #humanity. Satya Nadella was pushing the boundaries of AI solutions & their availability, seemingly without regard for human #safety, and Sundar Pichai was following suit, which Geoff apparently did not like.

Ilya Sutskever, OpenAI cofounder & the brain behind some of the highest-impact breakthroughs in AI (not just at OpenAI) of the past decade, also did not like the direction in which OpenAI was headed - not in terms of tech (because he was building it) but in the business strategy around this emerging tech. So he ousted #Sam Altman from the company after taking its then Board into confidence. But Satya Nadella weighed in heavily, reinstated Sam with a new board & came out as one of the most influential #CEOs of our times.

Six months later, Ilya & #Jan Leike left the company. Ilya left silently, but Jan shared his concern that OpenAI was not working on the Safety & Security aspects of future AI "horrors". Even after Sam / OpenAI publicly declared last July that 20% of the company's computing power would be dedicated to the Superalignment team over the next four years, they had not yet lived up to it.

Meanwhile, #Elon Musk continued to question the contradiction between OpenAI's existence and its founding principles. He started #xAI to contest OpenAI and recently got the most funding anyone has ever received in a Series A, B or C round - $6bn at a valuation of $24bn - all within 15 months of starting it!

There are two camps in the world right now: AI-doomers & AI-non-doomers. AI-doomers believe AGI (Artificial General Intelligence - the ultimate AI state, supposedly) is on the horizon and can doom humanity. AI-non-doomers believe AGI is far, far away and cannot doom humanity. The AI-doomer camp has Geoff Hinton, Elon Musk (even though he himself is building the same thing with xAI), Steve Wozniak & the 1,000+ scientists who signed an open letter in March 2023 calling for a pause on AI work for at least six months. The AI-non-doomer camp includes Yann LeCun, Andrew Ng and the like.

While we track these developments, only time will tell whether Satya Nadella ends up being one of the most influential CEOs OR the most #harmful CEO of our times (for his apparent disregard for the malicious use of democratized & easily accessible AI). Either way, he'll go down in AI history for making history.

Do you belong to either camp? If yes, share your why.

#ResponsibleAI #EthicalAI
Very insightful article
👍🏼 Fascinating dynamics in the world of AI development! Ramprasad G, it's crucial for tech leaders to prioritize human safety and ethical considerations. Thanks for shedding light on these important issues! 🙌🏼