Ethan Mollick’s Post

Ethan Mollick

I would be pushing for more people to learn formal prompt engineering if we actually had a science for prompting that was consistent, not weird, and did not involve complex psychological guesswork about what quasi-intelligent machines trained on all of human language might like to do. LLMs are weird.

  • [Image: bar chart]

Of course prompt engineering (especially iteration) is essential for repeated or scaled use. It just isn't that valuable for most people most of the time. https://www.oneusefulthing.org/p/getting-started-with-ai-good-enough

Che Gamble

Cloud Data Specialist, Architect & Consultant at Davies Technology Solutions

2d

Language is inherently abstract and evolves over time. You aren't going to get a syntax, just frameworks, patterns and guidelines. Ultimately, what works will come down to trial and improvement. For use cases where you can empirically evaluate the output, I've learnt in my work what generally works and what generally doesn't, though automatic prompt optimisation would be more efficient at this (can overfitting apply to prompts, too…?).

Some knowledge of what goes on under the hood helps, too. LLMs are more language calculators than databases. When you treat them like that, you learn your job is to guide the model towards your desired output, using keywords, recency bias and activated pathways to get the 'statistically most probable output', rather than expecting exact results the way you would from a statically typed scripting or querying language.

This is what I love most about Generative AI. Anyone can use it, and learning to use it feels more like play than a study session on the syntax of a programming language. It inspires creative thinking when articulating and conceptualising tasks, instructions and prompt patterns.

Matt Strain

Making AI Accessible & Actionable for executives and their teams. ex-Apple, ex-Adobe.

2d

Conversation > Engineering. Instead of striving for a rigid science of prompting, we're better served by cultivating the ability to ask better questions and engage in iterative dialogue. This mindset is more scalable and mirrors how we solve problems collaboratively in human contexts, making AI interaction both more intuitive and impactful. Phrases like:
► "What do you need to know to…?"
► "What have I missed?"
► "What are the underlying assumptions here?"
► "Can you provide alternative perspectives?"
► "What steps would make this output better or more actionable?"

A key issue often ignored is running a prompt at scale. For example, if you want to analyze a clause as it is written across thousands of agreements with a particular prompt, ensuring that it works well for all the different scenarios is difficult. Even after few-shot prompting, you will still run into surprises. Fine-tuning makes this better, but doing it without degrading the model's performance on other prompts is very difficult.

Prompt engineering is nowhere near as precise as SQL or other DSL-type languages. It's a 'natural language' approach that can be hampered by the wrapper (system) prompt. I don't think it can be easily standardized across the industry or across LLMs.

David Cropley, PhD

Professor of Engineering Innovation at University of South Australia

2d

I think the big assumption here is that there is such a thing as a stable, predictable, reliable approach to "prompt engineering". My experience is that people see what they want to see, and that the effects they attribute to subtle differences in prompting are mostly a mix of luck and confirmation bias.

Gianluca Mauro

AI entrepreneur, public speaker, and troublemaker | Follow me for hot takes on the world of AI 🤌

1d

What you describe is NOT prompt engineering. Prompt engineering involves defining KPIs for your prompts and measuring accuracy, reliability, consistency, etc. Prompt engineering is for those who build AI products, not for the casual ChatGPT user.

Vince Kellen, Ph.D.

CIO @ UCSD - Helping organizations master IT

2d

I see a future job: AI forensic psychologist. The person has to talk to the AI to get it to explain why it did what it did.

Richard Rosenow

Keeping the People in People Analytics | People Analytics speaker, blogger, keynote, & podcast guest | People Analytics Strategy at One Model

2d

I've wondered about this. Does it get this particular question about 'strawberry' wrong because the training data consists of human interactions where humans are wondering whether there is one R or two at the end of the word, not interacting with the word as a riddle?

For instance, if my kid was writing a grocery list and asked me "how many R's are in strawberry?", because the setting is not riddles and I know they are learning to spell, I would presume they are asking about the end of the word, which in their mind could have one or two (strawbery vs strawberry). I'd probably say two and help them spell it rather than confuse them by saying three (which could result in strawberrries). I imagine this comes up because we're all asking Google and other search engines that question across the training data, and for that reason (we collectively are not great at spelling online).

Are there other spelling riddles where LLMs fail at the riddle part but ultimately provide advice that would help someone spell the word? Maybe ChatGPT will know 😂

Ze'ev Abrams

CoFounder at Iteraite - an AI Product Management Agency

2d

I still think this is stupid, and it shows how NOT to develop future AI. If you knew anything about the brain, you'd know that different parts have different architectures. And although there is a "general" similarity across many parts, other parts have massively different architectures. So trying to solve this kind of problem with just LLMs is overkill, and not "intelligence" but rather the lack thereof. And in case you think I'm just blowing off steam, here's an example of a chatbot I made using a smarter way of approaching such problems, at a fraction of the price that gpt4000 costs! Why are we burning holes in the ozone layer (!?) to do the "wrong thing" when we already have solutions for this! [And it probably isn't what you think]
