Thomas LaToza’s Post

Associate Professor at George Mason University & Co-Founder OurCode

Check out Guy and Simon's new podcast on AI Native Development, reflecting on where AI-assisted dev is today and where it may be going as the whole process of development is slowly reoriented around the new potential of what LLMs can do. One big theme is how the act of programming may shift from specifying the how to specifying the what: devs write down intent and let LLMs figure out the code.

A big question is what, exactly, this looks like. Today, devs write a comment and the LLM writes a few lines of code. But in the vision of the future, the LLM is doing even more. How do devs really specify their intent well enough to let the LLM know what they're looking for? Answers come from many different perspectives and disciplines. The formal methods community has thought longest about building unambiguous specifications, and has also seen up close just how challenging it is to make a spec unambiguous and then get a developer to write it. Programming by example popularized the idea of specifying a few examples and then inferring a program, particularly in domains like building scraping scripts with CoScripter. A ton of work in program synthesis, relying on enumerative search with clever heuristics rather than LLMs for code generation, experimented with making everything more live and interactive, where a dev can tweak a spec and get a new program. More recently, model-based testing is exploring new representations for specs.

I think the biggest takeaway is that specifying intent is harder than it seems, and there may be no single solution for all problems and domains. Going down the formal path to unambiguity is attractive, but can make intent even harder to write than code. Staying informal seems easy, but inevitably means things will go wrong, and requires devs to have a clear way to look at what's getting generated, check it, and iterate fast. And there's a tradeoff between inferring intent from a few examples and making broader, more declarative statements.

Beyond all that, there's also a huge challenge with modularity. The biggest challenge with programming-by-example-style systems was that examples could be contradictory (and even when they weren't, changing them could have unpredictable impacts); the toy sketch at the end of this post makes that concrete. On the one hand, it's great to specify your intent once and have it hold across a whole codebase. Wouldn't it be great to write one policy that, say, always makes implementation choices to prioritize time over memory usage, or vice versa, and have an LLM figure out how? But what happens when that conflicts with intent somewhere else, which says the code needs low enough memory usage to run on a specific device? Even keeping intent much lower level may still raise this issue, as one of the diciest challenges in programming is dealing with hidden and unexpected effects.

I look forward to seeing how this all gets resolved.
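To make that programming-by-example pitfall concrete, here's a minimal sketch in Python. All the names (synthesize, PRIMITIVES) and the choice of string operations are my own invention for illustration, not any real system's API: an enumerative synthesizer searches compositions of a few primitives for a program consistent with every input/output example, and comes up empty when the examples contradict each other.

# Toy sketch of programming by example via enumerative search.
# Hypothetical names and primitives; not any real synthesis system's API.

from itertools import product

# Primitive string operations the search enumerates over (an illustrative choice).
PRIMITIVES = {
    "upper": str.upper,
    "lower": str.lower,
    "strip": str.strip,
    "title": str.title,
    "reverse": lambda s: s[::-1],
}

def synthesize(examples, max_depth=2):
    """Return the first composition of primitives consistent with every
    (input, output) example, or None if no program fits, e.g. because
    the examples contradict each other."""
    for depth in range(1, max_depth + 1):
        for names in product(PRIMITIVES, repeat=depth):
            def program(s, names=names):
                for name in names:
                    s = PRIMITIVES[name](s)
                return s
            # The spec is just the examples: accept the first program
            # that reproduces all of them.
            if all(program(i) == o for i, o in examples):
                return names
    return None

# Consistent examples: a program exists.
print(synthesize([("  hi ", "HI"), (" ok", "OK")]))  # ('upper', 'strip')

# Contradictory examples: the same input maps to two outputs, so search fails.
print(synthesize([("ab", "AB"), ("ab", "ba")]))      # None

The property that makes this appealing, that a handful of examples stands in for a full spec, is also what makes it brittle: add one conflicting example and no consistent program exists at all, and even compatible new examples can flip which program the search returns first.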

Guy Podjarny

Founder & CEO at Tessl, Founder & Board Member at Snyk

What does AI *Native* Software Development mean? How does it differ from AI Assisted dev, or compare to Cloud Native dev? Simon Maple and I discuss this in the first episode of the AI Native Dev podcast. It's fun to be back on the podcast trail, digging into a new topic! This episode has more of me sharing my own perspectives and thesis on AI Native Software Development, but I'm looking forward to hosting the brilliant guests we have in the queue and hearing their views. Give it a listen here: https://lnkd.in/eGJH7z3n
