An observation is that managers and teachers are often much better at “getting” LLMs than coders. Coders deal with deterministic systems. Managers and teachers are very experienced at working with fundamentally unreliable people to get things done, not perfectly, but within acceptable tolerances.
LLMs can behave deterministically if we set the temperature to 0. Got 'em!
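The temperature-0 quip has a concrete basis: the sampling temperature rescales the model's logits before the softmax, and as the temperature approaches 0 the distribution collapses onto the argmax, i.e. greedy decoding. A minimal sketch in plain Python (no LLM API involved; the function name is mine) shows the effect — though note that in real serving stacks, batching and floating-point nondeterminism mean temperature 0 is not an absolute guarantee:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Divide logits by the temperature, then apply softmax.

    As temperature -> 0, nearly all probability mass concentrates
    on the largest logit, so sampling becomes effectively greedy.
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]

# At temperature 1.0, mass is spread across tokens.
print(softmax_with_temperature(logits, 1.0))

# At temperature 0.01, the distribution is essentially one-hot on index 0.
print(softmax_with_temperature(logits, 0.01))
```

Running this, the low-temperature distribution puts virtually all probability on the top token, which is why "temperature 0" is shorthand for deterministic output.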
Gonna need to see evidence of this. SWEs have by far the highest uptick in LLM usage, and simply using LLMs is the number one way of getting better at using them.
Ethan Mollick I don't get it. In dev and code generation, LLMs give perfect results most of the time, on the first or at most the second try. This is a contrarian point of view compared to "LLMs can't count the Rs in Strawberry" or "LLMs hallucinate or give approximate results", but it's the truth. What is interesting is that the results are near 100% perfect. It saves hundreds of hours. Most coders adore this; it's a complete life changer. I have organised myself through prompt preparation and cutting so that in my next project, 90% to 95% of the code will be generated with the help of a new AI that complements LLMs. The goal is to pass in existing code "templates" and just ask for generation of new code. In docs, this is even more incredible. I pass in an old API project's Markdown file and ask the model to infer from it in order to generate a new project doc, using a few lines of prompt that describe the expected changes. The AI takes the old project's model as a "template", even though it isn't structured as a real template at all, removes the old APIs, and generates the docs for the new ones. Done in one pass, ready to dispatch. Two to three hours of painful, precise work reduced to 15 minutes of easy prompting. Tell me a real example of a coder that "does not get it"?
It highlights the differing mental models people bring to their interactions with technology: managers and teachers are accustomed to navigating ambiguity and coaxing acceptable outcomes from unpredictable inputs, be it people or, in this case, LLMs. Coders, on the other hand, operate in a world where precision and determinism are paramount, which may create a disconnect when working with systems that mimic human unpredictability. To a certain extent, this explains why some roles adapt more intuitively to GenAI tools.
Entrenched thinking. Too many software engineers believe there is only one right way. I was disappointed, after interviewing so many developers, that not a single one of them was using AI. I think their code craft is at the precipice of obsolescence. In my AI meetup experience, it was the product people, product managers and analysts, who were at the forefront of their organizations in terms of AI. Scientists are an interesting group: they want reproducibility, but I saw it for myself, scientists are also experimentalists and tool builders. Everything is changing.
It has been obvious from the beginning. They know how to explain things to other humans, and they can do the same with something human-like. And this is where all the clues are. They should be better at making datasets, though.
As a teacher, I believe my impact can come not merely from managing the limitations of LLMs but from unlocking potential. The real magic happened for me when I shifted from seeing it as an "unreliable tool" to recognizing "its ability to learn and adapt", and how I can transform every gap into a growth opportunity. For me, working with this new digital colleague is not merely about working around challenges; it's about working through them together. I view the challenges of today's LLM performance not as a ceiling but as a foundation from which we can create tomorrow's capabilities. My goal is not to maintain acceptable standards of expertise but to continuously expand what's possible. For me, every "not yet" is simply tomorrow's expertise waiting to be developed. I use LLMs not just to get things done but also to co-develop solutions and achieve what once seemed impossible. #MyLivedExperience #NotPrescription
That’s not an observation. That’s a hypothesis.
I agree with the spirit, but I think even more than currently active coders (who quickly get the not-wholly-deterministic and not-wholly-reliable coach vibe), the most challenged are senior people who have a background of using computer systems as an analytic tool earlier in their careers. The people who built sophisticated spreadsheets or wrote SAS and MATLAB routines 20 years ago (like me!) but have moved away from computer systems since then. They struggle mightily, trying to unlock the magic words to deterministically control the LLM precisely to their liking. Analogy: my 95-year-old dad co-developed one of the first mainframe computers east of the Iron Curtain. On it, he built a precursor (by 4 years) of SPSS. But in recent decades, even before LLMs, he struggled to effectively use Google search because he was always composing precise Boolean queries, unable to trust the engine to figure it out. His early experience was telling systems precisely what to do, fully optimized. LLMs/AI take that a step further.
https://www.oneusefulthing.org/p/the-best-available-human-standard