Can LLMs reason without prompting? We have all worked on "prompt engineering", trying to convince a recalcitrant LLM of the right way to solve the problem at hand. What if that were not required? What if LLMs could actually reason on their own? That is the question explored in the paper "Chain-of-Thought Reasoning Without Prompting" (https://lnkd.in/gpmwk3uW). The authors find that "CoT reasoning paths can be elicited from pre-trained LLMs by simply altering the decoding process". Apparently, the CoT paths are already present among the top-k alternative tokens at the first decoding step; standard greedy decoding simply never surfaces them. "Extensive empirical studies on various reasoning benchmarks show that the proposed CoT-decoding effectively elicits reasoning capabilities from language models, which were previously obscured by standard greedy decoding." A very interesting read, and one that suggests the days of "prompt engineers" earning their pay, great while it lasted, may be coming to an end.
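To make the idea concrete, here is a minimal sketch of that kind of decoding, written against the Hugging Face transformers API rather than the authors' code: branch over the top-k candidates for the first generated token, then continue greedily from each branch. The model name ("gpt2"), k, the example question, and the 64-token budget are illustrative assumptions, and the paper's further step of ranking branches by the model's confidence in the final answer is omitted here.

```python
# Minimal sketch of CoT-style decoding by branching on the top-k first tokens,
# then decoding greedily from each branch. Assumes torch + transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in; the paper uses larger pre-trained LLMs
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

# Illustrative question, posed without any CoT prompt.
question = ("Q: I have 3 apples and my dad has 2 more apples than me. "
            "How many apples do we have together?\nA:")
inputs = tokenizer(question, return_tensors="pt")

k = 5  # number of alternative first tokens to branch on
with torch.no_grad():
    logits = model(**inputs).logits[0, -1]     # next-token logits at the last position
    top_k_ids = torch.topk(logits, k).indices  # top-k candidates for the first token

paths = []
for token_id in top_k_ids:
    # Append the candidate first token, then continue greedily from there.
    branch_ids = torch.cat([inputs["input_ids"], token_id.view(1, 1)], dim=-1)
    with torch.no_grad():
        out = model.generate(
            branch_ids,
            max_new_tokens=64,
            do_sample=False,                     # greedy continuation per branch
            pad_token_id=tokenizer.eos_token_id,
        )
    prompt_len = inputs["input_ids"].shape[-1]
    paths.append(tokenizer.decode(out[0][prompt_len:], skip_special_tokens=True))

for i, path in enumerate(paths):
    print(f"--- branch {i} ---\n{path}\n")
```

Inspecting the printed branches side by side is the point: some of the non-greedy continuations contain the step-by-step reasoning that the single greedy path hides.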
Check out DSPy! It's programmatic prompting, and I'm almost certain that's the killer app in the Gen AI town!
At Lacuna Astro, we're actively exploring this innovative approach to generate detailed, step-by-step reports with precision. However, we're not eliminating the idea of "prompts before prompts," as a basic instruction is essential to trigger CoT reasoning—without it, the results might get a little too creative! 😂 We're also experimenting with optimizing the model's context window and leveraging RAG (Retrieval-Augmented Generation) to ensure consistency and relevance in every analysis. Exciting times ahead—keep inspiring and enlightening! Arun Krishnan sir ❤️