Chris Green’s Post

Chris Green

Senior SEO Consultant & Trainer, Conference Speaker & Mentor. #BeOnePercentBetter

Prompt engineering - a term people find either meaningless or frustrating - is essentially the craft of writing more effective prompts for an LLM like ChatGPT or similar. I've been asked a few times recently about prompt guides and engineering: is there an essential list of prompts we should all be using?

Well, I'm not a fan of copy-and-paste prompts being offered up as THE solution ❌ Whilst you can get some mileage out of duplicating well-established prompts, the models behind them change and develop quickly, and some services handle certain prompts better than others. Why not take the time now to practice better prompt writing?

1️⃣ Start with a broad problem/question and ask for more information about the subject. If you're really not sure where to start, this will give you better context.

2️⃣ Ask for a process to solve the problem, or break the problem down into stages yourself.

3️⃣ Give specific instructions for each stage of the process, testing after each one and tweaking until you get it right.

4️⃣ If something is more complex, or you don't know how to simplify it any further, ask for a chain-of-thought answer to see a "thinking out loud" style response.

5️⃣ Review the detail to understand what is being done, challenge any issues with examples, AND spell out how it should be instead.

There are more specific guides here (https://buff.ly/3GInnca) for working with OpenAI, but the general points above will make the process less frustrating and increase the accuracy of your output. Once you get used to writing prompts, you'll find you don't need to break things down into so many stages - it speeds up. But I'd rather get the right answer slowly than get the wrong output really quickly, which is the usual fear with LLM outputs.
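To make the staged approach concrete, here is a minimal sketch in Python, assuming the OpenAI Python client (pip install openai). The SEO task, the model name, and the prompts are illustrative assumptions, not something from the post itself:

```python
# A rough sketch of the staged prompting workflow above.
# Assumes the OpenAI Python client; the task and prompts are hypothetical.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask(history, prompt):
    """Send one prompt in an ongoing conversation and return the reply."""
    history.append({"role": "user", "content": prompt})
    response = client.chat.completions.create(
        model="gpt-4o",  # any chat model; behaviour varies by model
        messages=history,
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply


history = [{"role": "system", "content": "You are an experienced SEO consultant."}]

# 1) Broad question first, to build context.
ask(history, "What factors affect how well a page ranks for a keyword?")

# 2) Ask for a process, then work through it stage by stage.
ask(history, "Break auditing a single page for one keyword into numbered stages.")

# 3) Specific instructions for one stage; test and tweak before moving on.
ask(history, "For the title-tag stage only: list concrete checks to run.")

# 4) For anything still unclear, request a 'thinking out loud' answer.
print(ask(history, "Walk through your reasoning step by step, then give a final checklist."))
```

Keeping the full conversation in the messages list is what lets each narrower prompt build on the context established by the earlier, broader ones - the same staircase the five steps describe.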

