Prompt engineering - either a meaningless term or a frustrating one - is essentially the practice of writing more effective prompts for an LLM like ChatGPT. I've been asked a few times recently about prompt guides and prompt engineering: is there an essential list of prompts we should all be using?

Well, I'm not a fan of copy-and-paste prompts being offered up as THE solution ❌ Whilst you can get some mileage out of duplicating well-established prompts, the models behind them change and develop quickly, and SOME services handle some prompts better than others. Why not take the time now to practice better prompt writing?

1️⃣ Start with a broad problem/question and ask for more information about the subject. If you're really not sure where to start, this will give you better context.

2️⃣ Ask for a process to solve the problem, or break the problem down into stages yourself.

3️⃣ Give specific instructions for each step of the process, testing after each one and tweaking until you get it right.

4️⃣ If something is more complex, or you don't know how to simplify it any further, ask for a chain-of-thought answer to see a "thinking out loud" style response.

5️⃣ Review the detail to understand what is being done, challenge any issues with examples, AND spell out HOW it should be done.

There are more specific guides here (https://buff.ly/3GInnca) for working with OpenAI, but the general points above will make the process less frustrating and increase the accuracy of your output. Once you get used to writing prompts you'll find you don't need to break things down into so many stages - it speeds up. But I'd rather get the right answer slowly than the wrong output quickly, which is the usual fear with LLM outputs.
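If you want to see what this staged approach can look like in code rather than in the chat window, here is a minimal sketch against the OpenAI Python SDK. The model name, the example business question, and the prompt wording are my own illustrative assumptions, not part of the post:

```python
# A minimal sketch of the staged prompting approach above, using the OpenAI
# Python SDK. Model name and prompt wording are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def ask(messages):
    """Send a list of chat messages and return the assistant's reply text."""
    response = client.chat.completions.create(model="gpt-4o", messages=messages)
    return response.choices[0].message.content

# Step 1: start broad and ask for more information about the subject.
history = [{"role": "user", "content": "I need to reduce churn for a subscription box business. "
                                       "Before proposing anything, what information would you need from me?"}]
history.append({"role": "assistant", "content": ask(history)})

# Step 2: ask for a process, broken into stages.
history.append({"role": "user", "content": "Propose a step-by-step process to tackle this, one stage per line."})
history.append({"role": "assistant", "content": ask(history)})

# Step 4: for the trickiest stage, ask for a chain-of-thought style answer.
history.append({"role": "user", "content": "For the hardest stage, think out loud and explain your reasoning "
                                           "before giving the final recommendation."})
print(ask(history))
```

Each call keeps the full conversation history, so the review-and-challenge step (point 5) is simply another message appended to the same conversation.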
Chris Green’s Post
More Relevant Posts
-
I gave ChatGPT's o1-preview model the transcript of my wife's next YouTube video, and it actually came up with a better title than I did (which is a first). Much better than the cheesy ones from 4o.

I've found the new model to be significantly better than 4o at the tasks I use it for most... brainstorming, analyzing ideas, and summarizing complex documents. 4o responses feel bland in comparison. o1-preview feels more like talking to a knowledgeable person, with less of the standard 12 bullet points in every response. I haven't had a chance to test its coding capabilities yet, but my expectations are high.

One annoying caveat... the limit for o1-preview is 50 messages PER WEEK. This is a big step down from the daily limits of previous models. OpenAI is beginning to confirm an extrapolation I made a while back: eventually we'll get a model capable of doing some impressive work for us, but the first version of it might be so computationally expensive that you only get to ask it one question a year.

My wife visited Delphi this week and took this picture. Might we be heading toward something like the mythical oracle?
-
Hate creating processes like me? This is for you.

Being on the mildly chaotic end of the organised-chaos spectrum of working, creating processes doesn't come naturally to me. I've had to work on ways to do it better, because I know they're super important - especially in a service business.

This is my favourite hack using AI:

1. Use Loom to screen record yourself doing the task, talking through what you're doing.
2. Copy the transcript from the Loom video.
3. Paste it into this SOP Wizard GPT I built, which turns the transcript into a formalised SOP.

Bingo - an SOP document that you can then put into your ClickUp library of SOPs. The GPT says it's Amazon focused, but it should work for anything. https://lnkd.in/e9zPmGDu

Below is the kind of output you'll get from the GPT:

*Standard Operating Procedure (SOP) for Creating a Process Document Using Loom and ChatGPT*

Purpose: This SOP outlines the steps to create a formalized process document using Loom for video recording and ChatGPT for generating the document's content. This method simplifies documenting processes, particularly for those who struggle with writing.

Tools Needed:
1. Loom Browser Extension
2. ChatGPT (SOP Wizard)
3. ClickUp (for organizing SOP documents)

Procedure:
1. Install Loom Browser Extension
• Ensure that the Loom browser extension is installed on your web browser.
2. Record the Process Using Loom
• Open the Loom browser extension and start a new video recording.
• Perform the task you want to document while narrating each step clearly.
• Tip: Talking through each step is crucial, as the transcript of your narration will be used to generate the SOP.
3. Stop and Save the Recording
• Once the task is completed, stop the Loom recording.
• Save the video to your Loom dashboard.
4. Retrieve the Transcript
• Navigate to your Loom dashboard.
• Locate the recorded video and copy the transcript generated by Loom.
5. Generate the SOP Document Using ChatGPT
• Open the SOP Wizard (ChatGPT interface).
• Paste the copied transcript into the SOP Wizard.
• Press return/enter to submit the transcript.
• ChatGPT will process the transcript and generate a structured SOP document.
6. Organize the SOP Document
• Copy the generated SOP document.
• Paste it into your SOP library in ClickUp.
• Ensure the document is properly labeled and organized for easy reference by the team.

Notes:
• Ensure clear and concise narration during the recording to produce an accurate and useful transcript.
• Regularly update the SOP library in ClickUp to keep all documents current and accessible.

By following these steps, you can efficiently create comprehensive SOP documents that can be easily referred to and shared within your organization.
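If you would rather script the hand-off instead of pasting into the GPT by hand, here is a rough sketch of the same idea against the OpenAI API. The custom SOP Wizard GPT itself can't be called this way, so the system prompt, model name, and file names below are my own stand-ins:

```python
# Rough sketch: turn a saved Loom transcript into an SOP draft via the OpenAI API.
# This is a stand-in for the SOP Wizard GPT from the post, not the GPT itself;
# the system prompt, model name, and file paths are illustrative assumptions.
from pathlib import Path
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

transcript = Path("loom_transcript.txt").read_text(encoding="utf-8")

system_prompt = (
    "You turn screen-recording transcripts into Standard Operating Procedures. "
    "Output sections: Purpose, Tools Needed, Procedure (numbered steps with "
    "sub-bullets), and Notes."
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": transcript},
    ],
)

# Save the draft so it can be pasted into a ClickUp SOP library.
Path("sop_draft.md").write_text(response.choices[0].message.content, encoding="utf-8")
```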
-
This is an excellent article on how to talk to ChatGPT so that you ask questions in a way that gets you the answers you're looking for. It's a good grounding in how to get the best out of ChatGPT and other AIs such as Gemini and Copilot.
What Is Prompt Engineering: A Complete Guide
social-www.forbes.com
-
𝗖𝗵𝗮𝘁𝗚𝗣𝗧 is 𝗸𝗶𝗹𝗹𝗶𝗻𝗴 𝗶𝘁𝘀𝗲𝗹𝗳 rather than killing 𝗦𝘁𝗮𝗰𝗸-𝗢𝘃𝗲𝗿𝗳𝗹𝗼𝘄?

Came across an interesting post suggesting that the decline in Stack Overflow usage will, in turn, kill ChatGPT for developers. <- Read that again.

It seemed counterintuitive to me at first, but the author made a convincing argument (summarized here, as I'm unable to find the original post):

-> LLMs, ChatGPT among them, are trained on datasets that include Stack Overflow.
-> Because developers now use ChatGPT, the newer issues they discover never make it onto Stack Overflow.
-> That leaves no updated data for future LLMs to learn from.
-> Hence the author's conclusion: people will stop using LLMs and go back to Stack Overflow.

That's the gist, and I was 90% sold on the theory. However, I later realized:

• The inputs to version-0 LLMs won't be the only inputs to future versions. The 𝗟𝗟𝗠 𝗴𝗲𝗻𝗲𝗿𝗮𝘁𝗶𝗼𝗻 𝗶𝘁𝘀𝗲𝗹𝗳 𝗶𝘀 𝗮 𝗳𝗲𝗲𝗱𝗯𝗮𝗰𝗸 𝗹𝗼𝗼𝗽 𝗮𝗻𝗱 𝗶𝗻𝗽𝘂𝘁.
• 𝗜𝗻 𝗖𝗵𝗮𝘁𝗚𝗣𝗧'𝘀 𝗼𝘄𝗻 𝘄𝗼𝗿𝗱𝘀 - "Large language models don’t just rely on static data sources; they are increasingly trained on data generated from interactions with themselves."
• If Stack Overflow activity slows down, developers and companies may create alternative communities or platforms that serve as fresh data sources (or resort to reviving it). 𝗞𝗻𝗼𝘄𝗹𝗲𝗱𝗴𝗲-𝘀𝗵𝗮𝗿𝗶𝗻𝗴 𝗱𝗼𝗲𝘀𝗻’𝘁 𝗲𝗻𝗱; 𝗶𝘁 𝗲𝘃𝗼𝗹𝘃𝗲𝘀 𝘄𝗶𝘁𝗵 𝘁𝗵𝗲 𝘂𝘀𝗲𝗿 𝗯𝗮𝘀𝗲.

What do you think - is my realization enough to dispute the theory, or is the OP right? 🧐 #NotRareButBasic #LLM
-
Have you ever wondered how to refine a basic prompt you type into ChatGPT to get better results? I've developed a detailed, step-by-step guide (https://lnkd.in/eJP6rdXp) with practical examples to help you get the most out of ChatGPT. This guide was created for my talk earlier this week, “Prompt Engineering is the New SEO,” which launched a new series of AI-focused talks and workshops. Starting next week, I will write a blog post for each step outlined in the Jupyter Notebook from this series. I would love to hear any feedback or questions you may have!
-
I finally put aside some time to read Britney Muller's guide to LLMs. I highly recommend it to anyone looking for a good overview and a beginner's guide to the technology behind tools like ChatGPT. The section in part 2 about bias in the training sets is fascinating. Here's part 1: https://lnkd.in/eV7gRWus
What Are LLMs (like ChatGPT)? [Part 1] - Data Sci 101
https://datasci101.com
-
The cat's out of the bag: the secret hack to extract your Zap as JSON and create documentation in <5 mins. 🙀

You already know AI can read the hell out of JSON. You already know AI is great at summarizing stuff. What you didn't know is that your Zapier workflow can be exported to JSON (even if you don't have the Teams subscription).

Hidden in Zapier's source code is a script containing the entire JSON representation of your workflow, and it only takes seconds to copy/paste it into ChatGPT. Together with my honed prompt, you'll be polishing off documentation in mere minutes.

Here's the secret: Zapier is built on Next.js, which publishes all the relevant data about the current page in a script tag with the id "__NEXT_DATA__". You can inspect element, copy that JSON, paste it into ChatGPT, and get back documentation for the workflow in a couple of seconds.

I've experimented with many variations of prompts for generating documentation, and I have one that works really well for complex workflows. Link in the comments for the full guide! 🔗
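The author's actual prompt is behind the link, but as a rough sketch of the same flow, here is how the copied JSON could be handed to the API instead of the ChatGPT window. The documentation prompt, model name, and file name are my own placeholders:

```python
# Rough sketch: generate documentation from a Zap's JSON via the OpenAI API.
# Assumes you have already copied the contents of the __NEXT_DATA__ script tag
# (via inspect element) into zap_next_data.json. The prompt below is a
# placeholder, not the author's honed prompt.
import json
from pathlib import Path
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# Parse first so an obviously broken copy/paste fails early.
zap_data = json.loads(Path("zap_next_data.json").read_text(encoding="utf-8"))

prompt = (
    "The JSON below was extracted from a Zapier page and describes a Zap. "
    "Write step-by-step documentation for the workflow: the trigger, each action, "
    "the apps involved, and any filters or paths.\n\n" + json.dumps(zap_data)
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```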
-
I spent 4 hours writing ChatGPT prompts today. Here's what I learned.

The best way to write a GPT prompt is by... LEVERAGING GPT to create it. This is something I learned from Alexander Tsvetkov.

Here's the framework I use:
- Context - give context about the business
- Role - give GPT a role
- Instructions - give clear instructions
- Examples - this one is the most important: give clear examples of what the output should look like

I've found that 5 examples are the minimum for getting quality responses.

What do you think?
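As a minimal sketch of what the Context / Role / Instructions / Examples framework can look like when sent to the API rather than typed into the chat window (the business details, role, and example pairs below are made-up placeholders):

```python
# Minimal sketch of the Context / Role / Instructions / Examples framework.
# The business context, role, and example pairs are made-up placeholders.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

system_prompt = (
    # Context: give context about the business.
    "Context: We sell handmade ceramics online to design-conscious buyers.\n"
    # Role: give GPT a role.
    "Role: You are a senior e-commerce copywriter.\n"
    # Instructions: give clear instructions.
    "Instructions: Write a two-sentence product description, warm in tone, "
    "no exclamation marks, ending with the product's key benefit."
)

# Examples: few-shot pairs showing exactly what the output should look like.
# The post suggests at least five; two are shown here to keep the sketch short.
examples = [
    ("Stoneware mug, 350 ml, speckled glaze",
     "Hand-thrown in small batches, this speckled stoneware mug keeps your coffee "
     "warm and your desk looking calm. Its 350 ml size is the sweet spot between "
     "a quick espresso and an all-morning brew."),
    ("Porcelain vase, 20 cm, matte white",
     "This matte white porcelain vase gives stems room to breathe without stealing "
     "the show. At 20 cm it fits a windowsill as easily as a dining table."),
]

messages = [{"role": "system", "content": system_prompt}]
for product, description in examples:
    messages.append({"role": "user", "content": product})
    messages.append({"role": "assistant", "content": description})

# The real request follows the examples.
messages.append({"role": "user", "content": "Terracotta planter, 15 cm, unglazed"})

response = client.chat.completions.create(model="gpt-4o", messages=messages)
print(response.choices[0].message.content)
```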
-
🚨 OpenAI's OFFICIAL prompt writing guide

5 strategies to get powerful results with your prompts:

1. Write clear instructions:
• Include details in your prompt to get more relevant answers
• Specify the steps required to complete a task
• Provide examples
________
2. Provide reference text:
• Instruct ChatGPT to answer using a reference text, such as a link to a PDF or a website
• Instruct ChatGPT to answer with citations from a reference text
________
3. Split complex tasks into simpler subtasks:
• Since there is a limit on how much text you can insert into ChatGPT, summarize long documents piece by piece to stay within the limit
• For prompts that involve multiple instructions, try breaking the prompts into smaller chunks
________
4. Give ChatGPT some time to think:
• Instruct ChatGPT to work out its own solution before rushing to a conclusion
• Ask ChatGPT if it missed anything on previous passes
________
5. Test changes systematically:
• Evaluate ChatGPT's outputs with reference to gold-standard answers

Image source: MindBranches
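To make strategy 3 concrete, here is a minimal sketch of piece-by-piece summarization of a long document. The chunk size, model name, and prompt wording are my own illustrative assumptions, not part of OpenAI's guide:

```python
# Minimal sketch of strategy 3: split a long document into chunks, summarize
# each chunk, then summarize the summaries. Chunk size, model name, and prompt
# wording are illustrative assumptions, not part of OpenAI's guide.
from pathlib import Path
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def summarize(text, instruction):
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": f"{instruction}\n\n{text}"}],
    )
    return response.choices[0].message.content

document = Path("long_report.txt").read_text(encoding="utf-8")

# Naive fixed-size chunking; in practice you'd split on sections or paragraphs.
chunk_size = 8000  # characters, chosen to stay well inside the context limit
chunks = [document[i:i + chunk_size] for i in range(0, len(document), chunk_size)]

partial_summaries = [
    summarize(chunk, "Summarize this section of a longer report in 5 bullet points.")
    for chunk in chunks
]

# Second pass: combine the piece-by-piece summaries into one overview.
final_summary = summarize(
    "\n\n".join(partial_summaries),
    "These are summaries of consecutive sections of one report. "
    "Combine them into a single one-page summary.",
)
print(final_summary)
```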
-
Hallucination - by Andrej Karpathy
====================
I always struggle a bit when I'm asked about the "hallucination problem" in LLMs. Because, in some sense, hallucination is all LLMs do. They are dream machines.

We direct their dreams with prompts. The prompts start the dream, and based on the LLM's hazy recollection of its training documents, most of the time the result goes someplace useful. It's only when the dreams go into territory deemed factually incorrect that we label it a "hallucination". It looks like a bug, but it's just the LLM doing what it always does.
====================
At the other end of the extreme, consider a search engine. It takes the prompt and just returns one of the most similar "training documents" it has in its database, verbatim. You could say that this search engine has a "creativity problem" - it will never respond with something new. An LLM is 100% dreaming and has the hallucination problem. A search engine is 0% dreaming and has the creativity problem.
====================
All that said, I realize that what people *actually* mean is that they don't want an LLM Assistant (a product like ChatGPT etc.) to hallucinate. An LLM Assistant is a much more complex system than just the LLM itself, even if one is at the heart of it. There are many ways to mitigate hallucinations in these systems - using Retrieval Augmented Generation (RAG) to more strongly anchor the dreams in real data through in-context learning is maybe the most common one. Disagreements between multiple samples, reflection, verification chains, decoding uncertainty from activations, tool use... all are active and very interesting areas of research.
====================
I know I'm being super pedantic, but the LLM has no "hallucination problem". Hallucination is not a bug, it is the LLM's greatest feature.
====================
https://lnkd.in/gSHkJtyp
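Karpathy names RAG as the most common mitigation. As a minimal sketch of that idea (the toy documents, word-overlap retriever, and prompt below are my own placeholders, not from his post), retrieval simply means finding relevant text first and putting it into the context so the "dream" is anchored to it:

```python
# Minimal sketch of Retrieval Augmented Generation (RAG): retrieve relevant
# text first, then ask the model to answer using only that text. The toy
# documents, the word-overlap scoring, and the prompt are placeholders; real
# systems use embedding search over a proper document store.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

documents = [
    "Invoices are paid within 30 days of receipt, per the 2023 finance policy.",
    "Support tickets are triaged twice daily, at 9am and 3pm UK time.",
    "The on-call rota rotates every Monday and is published in the team wiki.",
]

def retrieve(query, docs, k=2):
    """Return the k documents sharing the most words with the query (toy retriever)."""
    query_words = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(query_words & set(d.lower().split())), reverse=True)
    return scored[:k]

question = "How quickly do we pay invoices?"
context = "\n".join(retrieve(question, documents))

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": f"Answer using only the context below. If the context does not "
                   f"contain the answer, say you don't know.\n\nContext:\n{context}\n\n"
                   f"Question: {question}",
    }],
)
print(response.choices[0].message.content)
```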