🚀 Ever wondered if AI can truly "think" like humans? 🧠 Yann LeCun, a pioneer in AI, sheds light on this intriguing question!

LeCun explains that current Large Language Models (LLMs) excel at tasks that require instinctive, automatic responses: what he calls System One thinking. Examples include simple pattern recognition and routine tasks. However, when it comes to System Two reasoning (tasks that require deliberate, planned thought) LLMs fall short. These include complex decision-making and strategic planning, where a deeper understanding and more thoughtful consideration are needed.

The current models predict the next word in a sequence, which works well for generating text but lacks the depth needed for true reasoning. They are not capable of advanced planning or optimizing their answers in an abstract, meaningful way.

LeCun envisions future AI systems overcoming these limitations through energy-based models. These models will measure how well an answer fits a prompt, optimizing responses efficiently and thoughtfully. This approach promises a significant leap forward in making AI more capable of reasoning like humans.

🌟 Curious about AI's capabilities and limitations? Watch the video and join the conversation!
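To make the energy-based idea concrete, here is a minimal toy sketch of the inference-as-optimization view described above: instead of sampling the next token, an energy function scores how well each whole candidate answer fits the prompt, and the system returns the minimum-energy answer. The bag-of-words embedding and cosine-based energy below are illustrative stand-ins, not LeCun's actual architecture.

```python
# Toy sketch of an energy-based view of answering:
# score whole candidate answers against the prompt and
# pick the one with the lowest energy (best fit).
from collections import Counter
import math

def embed(text):
    """Bag-of-words vector: a stand-in for a learned encoder."""
    return Counter(text.lower().split())

def energy(prompt, answer):
    """Lower energy = better fit.
    Here: negative cosine similarity between bag-of-words vectors,
    a placeholder for a learned energy function E(prompt, answer)."""
    p, a = embed(prompt), embed(answer)
    dot = sum(p[w] * a[w] for w in p)
    norm = (math.sqrt(sum(v * v for v in p.values()))
            * math.sqrt(sum(v * v for v in a.values())))
    return -dot / norm if norm else 0.0

def best_answer(prompt, candidates):
    """Inference as optimization: argmin of energy over candidates."""
    return min(candidates, key=lambda c: energy(prompt, c))

prompt = "why is the sky blue"
candidates = [
    "the sky is blue because of rayleigh scattering of sunlight",
    "bananas are a good source of potassium",
]
print(best_answer(prompt, candidates))
```

In a real energy-based system the search over answers would happen in a learned abstract representation space rather than over a fixed candidate list, but the shape of the computation (score, then optimize) is the same.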
Emin Sadiyev’s Post
More Relevant Posts
-
LLMs are fundamentally parrots. Would you let a parrot run your key business processes? No. Would you let it do very specific, well-defined tasks with careful oversight? Maybe. Narrow scope is key to effective #automation using #AI.
Can LLMs reason? | Yann LeCun and Lex Fridman
https://www.youtube.com/
-
Some quick observations from trying out the o1-preview from OpenAI:

1. These new models reduce the need to master prompt engineering, which will make conversational AI tools more useful for a broader audience.
2. Even though these models take longer to think, they reach a "final" good response quicker by avoiding the need for longer conversations.
3. The responses sound more convincing, making errors and mistakes in responses harder to identify.

There are lots of technical innovations and improvements in these models. If you'd like to see a discussion on the importance and challenges of reasoning, check out this excerpt from a longer podcast with Yann LeCun: https://lnkd.in/gi-6XdFQ
Can LLMs reason? | Yann LeCun and Lex Fridman
https://www.youtube.com/
-
Last month, PhD researcher Dyah Adila told Snorkel researchers about her work on ROBOSHOT, a novel approach to get better performance out of foundation models without fine-tuning. This work shows promise and could meaningfully impact how enterprise AI teams approach FM applications. Watch the video here: #airesearch #foundationmodels
ROBOSHOT: better foundation model performance without fine-tuning (Stanford researcher presentation)
https://www.youtube.com/
-
Good morning everyone! 😀 Have you ever noticed how a little twist in your question can change your customers' perception and answers? Romit C. has shared a practical joke that highlights this phenomenon. Check out the video here: https://lnkd.in/eHV57bJu. #customersuccess #AI #switzerland
Practical Joke - The art of question framing for maximum impact
https://www.youtube.com/
-
Excited to share my perspective on the fascinating topic of `LLM-as-Judge`
-
NEW: Watch the latest #KempnerInstitute Seminar Series talk: Yoav Goldberg of Bar-Ilan University and Ai2 describes his work on the limits of LLMs, and how new abilities become possible when LLMs are embedded in larger systems. https://lnkd.in/ejH3Z2gQ #ML #AI
(Some) Open Frontiers with LLMs with Yoav Goldberg
https://www.youtube.com/
-
https://lnkd.in/eGWtx6YS My interest in LLMs and AI is leading me to pursue an older, unanswered question. Everyone is talking about "Artificial Intelligence", yet I fear we never spent as much time talking about "Intelligence" itself. Intelligentia sine artificio (intelligence without artifice). Brute forcing thought.... interesting. #joscha #reasoning #ai
LLMs and Reasoning - @lexfridman
https://www.youtube.com/
-
In part 4 of my series exploring number theory I present the outline of my proof of the ABC Conjecture.
Beyond Binary Claims: Mathematical Discovery Through Human-AI Collaboration Part 4
kenclements.substack.com
-
I am sharing some screenshots from Yann LeCun's lecture series. #artificialintelligence
AGI Speech by LeCun
link.medium.com
Co-Founder of Altrosyn and Director at CDTECH | Inventor | Manufacturer
LeCun's insights on AI's current capabilities resonate with past advancements in technology, reminiscent of early computer systems' limitations in complex reasoning tasks. As we navigate AI's journey, paralleling it with historical leaps, what advancements in energy-based models do you foresee to bridge the gap between System One and System Two thinking in AI, particularly in domains requiring nuanced decision-making and strategic planning?