Leslie D'Monte’s Post

AI systems routinely outperform humans, affirms the Stanford AI Index Report 2024. Should we be worried?

Hardly a day goes by without someone speculating whether AI systems like GPT-4, Gemini 1.5 Pro, Claude 3, and now LLaMA 3 are becoming sentient. The fear is that AI's intelligence, better known as artificial general intelligence (AGI) or artificial super intelligence (ASI), will exceed that of the most intelligent humans, making it a sort of Alpha Intelligence that may eventually even enslave humans.

#Nvidia CEO Jensen Huang has proposed that AGI will emerge within five years, while Ben Goertzel foresees it happening in just three. Elon Musk predicts that by the end of 2025 or early 2026, AI will be smarter than any single human, and probably smarter than all humans combined by 2029. Meta CEO Mark Zuckerberg, too, has announced that his company is entering the race to build AGI, though he has not offered a timeline. These are all credible voices, and we must pay attention.

But one also needs to heed equally accomplished voices such as Yann LeCun, Andrew Ng, and Gary Marcus, who beg to differ and have consistently tempered expectations around AGI.

Earlier this month, Stanford University's seventh edition of the AI Index report weighed in on the same topic, noting that AI systems like GPT-4, Gemini, and Claude 3 are "impressively multimodal" and "routinely exceed human performance on standard benchmarks". The report qualifies, however, that current AI technology still has limitations: it cannot reliably deal with facts, perform complex reasoning, or explain its conclusions.

What do you think? LiveMint https://lnkd.in/g8VmStwv

Tech Talk by Leslie D'Monte

livemint.com
