Abhay Johorey’s Post


Digital Strategy at ICICI Bank

AI is not good software. It is pretty good people. Paradoxical? We want our software to yield the same outcomes every time, so we ensure that software systems are reasonably reliable and predictable. Large Language Models are neither of those things, and will absolutely do different things every time. They have a tendency to forget their own abilities, to solve the same problem in different ways, and to hallucinate incorrect answers. There are ways of making results more predictable, such as turning down the level of randomness and picking a known "seed" to start, but then you get answers so boring that they are almost useless. Reliability and repeatability will improve, but both are currently very low, which can result in some interesting interactions.

We also want to know what our software does, how it does it, and why it does it. We don't know any of these things about LLMs. They are also literally inexplicable: when you ask an LLM why it did something, it makes up an answer rather than truly reflecting on its own "thoughts." There is no good way of understanding their decision-making, though, again, researchers are working on it.

Finally, we should know how to operate a piece of software. Software projects are often highly documented and come with training programs and tutorials to explain how people should use them. But there is no operating manual for LLMs. You can't go to the world's top consultancies and ask them how best to use LLMs in your organization; no one has a rulebook, and we are all learning by experimenting.

So the software analogy is a bad one. You should actually treat AI as people, since that is, pragmatically, the most effective way to use the AIs available to us today. What tasks are AI best at? Intensely human ones. They do a good job with writing, with analysis, with coding, and with chatting. They make impressive marketers and consultants.
They can improve productivity on writing tasks by over 30% and programming tasks by over 50% by acting as partners to which we outsource the worst work. But they are bad at typical machine tasks, like repeating a process consistently or doing math without a calculator. So give them "human" work and they may succeed; give them machine work and you will be frustrated.

And, of course, the AI still lies, makes mistakes, and "hallucinates" answers. But, again, so do humans. I would never send out an intern's work without checking it over, or at least without having worked with that person enough to know their work did not need checking. In the same way, an AI may not be error-free, but it can save you a lot of work by providing a first pass at an annoying task. We need to decide which tasks we are willing to delegate with oversight, which we want to automate completely, and which we should preserve for humans alone. #Genai #LLMs https://lnkd.in/gzEtUqvr
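The "turn down the randomness and fix a seed" trade-off described above can be sketched with a toy token sampler. This is purely illustrative — the function, the logit values, and the temperature/seed parameters are assumptions for the sketch, not any real LLM's API — but it shows why a fixed seed gives repeatable (and temperature near zero gives predictable but "boring") output:

```python
import math
import random

def sample_token(logits, temperature=1.0, seed=None):
    """Toy sampler: pick one token index from raw logits.

    Lower temperature sharpens the distribution; temperature == 0 is
    treated as greedy (argmax) decoding. A fixed seed makes every draw
    repeatable. Illustrative only, not a real LLM decoder.
    """
    if temperature <= 0:  # greedy: always the single most likely token
        return max(range(len(logits)), key=lambda i: logits[i])
    rng = random.Random(seed)
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max before exp for numeric stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    # Draw from the cumulative distribution
    r = rng.random()
    acc = 0.0
    for i, e in enumerate(exps):
        acc += e / total
        if r <= acc:
            return i
    return len(logits) - 1

logits = [2.0, 1.0, 0.5, 0.1]  # made-up scores for four candidate tokens

# Same seed and temperature -> identical, repeatable picks across runs.
a = [sample_token(logits, temperature=0.7, seed=42) for _ in range(5)]
b = [sample_token(logits, temperature=0.7, seed=42) for _ in range(5)]

# Temperature 0 -> always the argmax: predictable but "boring".
greedy = sample_token(logits, temperature=0.0)
```

With no seed and a higher temperature, repeated runs can differ — which is the unpredictability the post describes; pinning both parameters buys repeatability at the cost of variety.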

Amrinder Singh Deol

Innovative Marketing and Creative Professional

7mo

👍👍👍 Your point about deciding which tasks to delegate, automate, or reserve for humans is crucial. As we navigate this evolving landscape, thoughtful integration of AI into our workflows can lead to impressive outcomes.

