Full video of my fireside chat with Gaurav Aggarwal on the future of AI, world models, planning and reasoning, the limitations of LLMs, AI as an empirical science, why neural nets were shunned, and how India and other countries could jumpstart their AI ecosystems by creating industry research labs with ambitious goals.
Yann LeCun, Turing Award winner and Chief AI Scientist at Meta, spoke to Gaurav Aggarwal about the sutras required for strategic autonomy in AI for India. He emphasised that establishing pre-commercial AI research and development is critical. This approach requires new institutions. As he mentions in the clip, the Indian equivalent of leading AI research institutions—similar to Facebook AI Research (FAIR)—is more likely to emerge from domestic corporations. This is India’s opportunity to lead AI development from within. You can listen to the full conversation here: https://lnkd.in/gAsXifJw - with Sharad Sharma
Fireside chat with Yann LeCun hosted by Gaurav Aggarwal, iSPIRT
https://www.youtube.com/
The LLM folks never promised AGI. Not sure why it is perceived as their technical debt.
I'm surprised the industry and the open source community have been sleeping on JEPA.
This discussion sounds like a treasure trove of insights! Especially intrigued by the idea of industry research labs driving AI innovation in countries like India. Definitely adding this to my watchlist!
Interesting
Great advice Yann. Thanks
Thanks for sharing your thoughts! I really like the golden outfit, by the way. 😊

Inference by optimization, as you describe it, does sound, in principle, superior to inference by forward propagation in neural nets, and your JEPA/world model proposition seems like a compelling blueprint for the next generation of AI. However, I wouldn’t be surprised if there’s pushback on the suggestion of abandoning LLMs, especially since Meta is still promoting Llama, which is built on them. I think the AI community still needs to continue developing LLMs for now, and once new architectures prove to be practically superior, the tech world can make the switch—just as RNNs were eventually abandoned in favor of Transformers.

And yes, you're absolutely right about Hinton being wrong when it comes to LLMs and subjective experience. It's surprising that he still holds that view. Finally, I sometimes wonder why the focus is on achieving human-level intelligence in AI. Wouldn't it be cheaper and more efficient to just have more babies?