RTFP: Rewrite the f_____ prompt. I’m borrowing the (crude) tone from the original acronym, RTFM, to remind myself of the state of the tech after seeing a glowing post about using Claude computer control for a brief experiment…and an agonized commenter who tried it on a genuine problem and failed, after spending significant time trying to get the prompts “just right” to finish the job (and to fix the new issues it created). If RTFP is the advice you get when you’re having trouble getting what you want out of an AI interface…it’s not you. It’s a reflection of how unstructured and experimental the technology is right now. It’s not a magic box that can solve your problems as easily as it appears to solve a (non-critical) demo. It’s still worth tinkering when you don’t get what you need immediately, if only to clarify your own thinking about what you need…but keep it focused, timebox it, and be prepared to fall back to what already works for solving your problems.
Julie Meridian’s Post
More Relevant Posts
-
🔥 Dify v0.7.0 is out! We've launched Conversation Variables and Variable Assigner nodes in Dify v0.7.0, tackling LLM memory limitations. These features enable precise storage, retrieval, and updating of context information throughout the conversation flow. Supporting structured data types, they give chatflow-built LLM apps precise memory control, boosting LLMs' ability to handle complex scenarios in production. - Read the blog: https://lnkd.in/gz-yBJJj - Docs: https://lnkd.in/gxXuerG3 We've also added new models and tools, and improved workflow functionality to enhance your AI apps. See the full changelog: https://lnkd.in/ga73_wcF
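To make the idea concrete, here is a minimal, generic sketch of the conversation-variable pattern in plain Python. It is not Dify's actual API; the class and method names are placeholders standing in for the platform's Conversation Variables store and Variable Assigner node.

```python
from dataclasses import dataclass, field
from typing import Any

@dataclass
class ConversationVariables:
    """Structured state that persists across chat turns."""
    store: dict = field(default_factory=dict)

    def assign(self, name: str, value: Any) -> None:
        # Analogous to a Variable Assigner node: write or overwrite a value.
        self.store[name] = value

    def get(self, name: str, default: Any = None) -> Any:
        # Retrieval step: inject stored context back into the next prompt.
        return self.store.get(name, default)

# Hypothetical usage inside a chat loop:
cv = ConversationVariables()
cv.assign("patient_name", "Alice")           # captured in turn 1
cv.assign("symptoms", ["cough", "fever"])    # structured list, turn 2
print(f"Patient: {cv.get('patient_name')}, symptoms: {cv.get('symptoms')}")
```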
-
📦 To help you get started, we've prepared two templates that showcase how to implement Conversation Variables and Variable Assigner nodes in your chatflow. You'll find them under 'Explore' -> Patient Intake Chatbot & Personalized Memory Assistant. Let's dive in and explore these powerful new features together!
-
Artificial intelligence model helps produce clean water - Tech Xplore: The researchers first built a random forest model, a tree-based machine learning technique utilized for regression problems, and then applied it to ... http://dlvr.it/TDb0lS
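For readers unfamiliar with the technique the article names, here is a minimal scikit-learn sketch of a random forest regressor. The features and target below are random placeholders, not the researchers' actual water-treatment data.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((200, 4))  # placeholder inputs (e.g. water-quality measurements)
y = X @ np.array([0.5, 1.2, -0.3, 0.8]) + rng.normal(0, 0.1, 200)  # synthetic target

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0)  # ensemble of trees
model.fit(X_train, y_train)
print("R^2 on held-out data:", model.score(X_test, y_test))
```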
-
I was able to feed some lipid profile test report images to Gemini (not the Pro version) and get the following line chart from it, amazing 😍. I also tried ChatGPT-4 (Copilot on Windows) and Claude AI; both failed. The prompt was: Can you analyze the following lipid profile report images and plot a line graph where the X axis is time (using the year and month from each report) and the Y axis represents the value. There should be multiple lines for "Cholesterol - Total", "Triglycerides", "HDL-C", "LDL-C" and "CHO/HDL-C Ratio". Result:
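For comparison, here is roughly what that chart takes to produce by hand with matplotlib, assuming the values have already been read off the report images. All numbers below are placeholders, not real lab results.

```python
import matplotlib.pyplot as plt

dates = ["2022-01", "2022-07", "2023-01", "2023-07"]  # hypothetical report dates
series = {
    "Cholesterol - Total": [210, 198, 185, 180],
    "Triglycerides":       [160, 150, 140, 135],
    "HDL-C":               [42, 45, 48, 50],
    "LDL-C":               [135, 123, 110, 105],
    "CHO/HDL-C Ratio":     [5.0, 4.4, 3.9, 3.6],
}

for name, values in series.items():
    plt.plot(dates, values, marker="o", label=name)  # one line per analyte

plt.xlabel("Time (year-month from report)")
plt.ylabel("Value")
plt.title("Lipid profile over time")
plt.legend()
plt.show()
```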
-
Last week, we benchmarked the new 3.5 Sonnet (Upgraded) model on our five datasets. Here are the results:
- On Legalbench, it's now exactly tied with GPT 4o, and it beats 4o on CorpFin and CaseLaw.
- It usually, but not always, performs a few percentage points better than the previous version: for example, Legalbench (+1.3%), ContractLaw Overall (+0.5%), and CorpFin (+0.8%).
- There are some instances where it experienced a performance regression, including TaxEval Free Response (-3.2%) and CaseLaw Overall (-0.1%).
- Although it's competitive with 4o, it's still not at the level of GPT o1, which still claims the top spots on almost all of our leaderboards.
You can view the full results at https://www.vals.ai
-
Andrej Karpathy introduced the concept of LLMs not as chatbots, but as the kernel process of a new Operating System that orchestrates:
- input & output across modalities (text, audio, vision)
- a code interpreter, with the ability to write & run programs
- browser / internet access
- an embeddings database for files and internal memory storage & retrieval
I created LLM-OS, a local implementation of the LLM-OS idea, where multiple AI agents orchestrate and delegate tasks amongst each other to come up with a more robust and factually correct response. Check it out here: https://lnkd.in/gdDmrJu6
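Here is a toy sketch of the delegation pattern, not the linked LLM-OS codebase: a "kernel" routine passes a task through specialist agents, with each stand-in function representing what would be an LLM call in a real system.

```python
from typing import Callable

def researcher(task: str) -> str:
    # Stand-in for an LLM call that drafts an answer.
    return f"draft answer for: {task}"

def critic(draft: str) -> str:
    # Stand-in for a second LLM call that reviews the draft for errors.
    return f"reviewed({draft})"

def kernel(task: str, agents: list[Callable[[str], str]]) -> str:
    """Orchestrate: pipe the task through each agent in turn."""
    result = task
    for agent in agents:
        result = agent(result)
    return result

print(kernel("summarize the LLM-OS idea", [researcher, critic]))
```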
-
If you're interested in using jan.ai or something similar to access LLMs without exposing your personal data. I recommend the course “Installing, Running and Testing LLMs on Your Local Computer” by Ray Villalobos. You can check it out here: https://lnkd.in/gg29MeCV #largelanguagemodels.
-
A couple of notes about parts of LLMs that are hardly compatible with what we call "intelligence". It is easy to forget, but LLMs use what could be called a "mechanistic text generation" approach. Models like GPT generate only one output token per pass, no matter how long the input is. A piece of code called the "generation loop" then appends that token to the original input and runs the model again to produce the next token. The truth is that the answer is assembled by the loop, not by the network. Imagine if people used this strategy: we would hear something, say one syllable like "HE", then, after hearing it, generate "LL", and finally (hopefully) add "O" at the end. In the same category is the way alternatives are chosen. The model actually produces probabilities for several possible next tokens as continuations of the input. Usually we pick the most probable token; alternatively, we can pick several candidates and estimate their likelihood in context, but either way we end up with one final token. The decision of which token to use is not made by the LLM itself, but by a small "if" operator outside the model.
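Here is a minimal sketch of that generation loop. `fake_model` is a stand-in for a real LLM forward pass; the point is that the loop and the final "if"-style selection live outside the network.

```python
VOCAB = ["HE", "LL", "O", "<eos>"]

def fake_model(tokens: list[str]) -> list[float]:
    # Stand-in: return one probability per vocabulary entry.
    # A real model computes these from the whole input sequence.
    i = min(len(tokens), len(VOCAB) - 1)
    return [1.0 if j == i else 0.0 for j in range(len(VOCAB))]

def generate(max_steps: int = 10) -> list[str]:
    tokens: list[str] = []
    for _ in range(max_steps):
        probs = fake_model(tokens)
        # The small "if" operator: pick the most probable token.
        next_token = VOCAB[probs.index(max(probs))]
        if next_token == "<eos>":
            break
        tokens.append(next_token)  # feed the output back in as input
    return tokens

print("".join(generate()))  # -> HELLO, one syllable at a time
```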
-
"Recently attended an insightful masterclass on 'Understanding Machine Learning', gaining in-depth knowledge and practical insights into the fascinating world of machine learning algorithms and their applications. Delving into key concepts and methodologies, I've enhanced my understanding of how machine learning shapes modern technologies and decision-making processes. Excited to leverage this newfound expertise to drive innovation and tackle complex challenges in the ever-evolving landscape of artificial intelligence." #NxtWave
-
#Day334 of #500DaysGenerativeAI Implementing an advanced RAG approach that adapts its retrieval strategy based on the type of query. By leveraging LLMs at various stages, it provides more accurate, relevant, and context-aware responses to user queries.
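A generic sketch of query-adaptive retrieval is below; the routing rules and helper names are toy placeholders, not the author's pipeline. In a real system, the classifier would itself be an LLM call rather than keyword checks.

```python
def classify_query(query: str) -> str:
    # Stand-in for an LLM-based query classifier.
    q = query.lower()
    if any(w in q for w in ("when", "date", "year")):
        return "factual"
    if "compare" in q:
        return "analytical"
    return "general"

def retrieve(query: str, strategy: str) -> list[str]:
    # Each branch could use a different index, top-k, or re-ranking step.
    if strategy == "factual":
        return [f"top-1 exact match for '{query}'"]
    if strategy == "analytical":
        return [f"doc A about '{query}'", f"doc B about '{query}'"]
    return [f"broad context for '{query}'"]

query = "Compare vector search and keyword search"
strategy = classify_query(query)
print(f"[{strategy}] context: {retrieve(query, strategy)}")
```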