Raman Sharma’s Post

Many early, carefully hand-crafted code optimizations became irrelevant as CPUs grew faster. Similarly, many early optimizations in AI applications existed only to work around the limits of LLM context windows. As context windows have expanded, many applications can now stuff them with all available context and get a good-enough answer in most cases. This holds for coding scenarios too, but for large enterprise codebases we see that a big context window is helpful, not sufficient. This is where global search and graph-retrieval techniques help produce accurate results. This blog post from Beyang Liu, written in partnership with Google, explains in detail - https://lnkd.in/ghxn8twF
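To make the graph-retrieval idea concrete, here is a minimal sketch (my own illustration, not Sourcegraph's implementation; all symbol names and the `graph_retrieve` helper are hypothetical). The idea: extract a symbol-reference graph from the codebase, then expand outward from the symbols a query matches, so the context window holds related definitions rather than just nearby text.

```python
from collections import deque

# Toy reference graph: symbol -> symbols it references.
# In a real system this would be built by a code indexer.
CODE_GRAPH = {
    "billing.charge": ["billing.Invoice", "payments.gateway"],
    "billing.Invoice": ["models.Customer"],
    "payments.gateway": ["config.PAYMENT_URL"],
    "models.Customer": [],
    "config.PAYMENT_URL": [],
}

def graph_retrieve(seed_symbols, graph, max_hops=2):
    """Breadth-first expansion: collect all symbols within max_hops of the seeds."""
    seen = set(seed_symbols)
    frontier = deque((s, 0) for s in seed_symbols)
    while frontier:
        symbol, depth = frontier.popleft()
        if depth == max_hops:
            continue  # don't expand past the hop budget
        for ref in graph.get(symbol, []):
            if ref not in seen:
                seen.add(ref)
                frontier.append((ref, depth + 1))
    return sorted(seen)

# A query matching "billing.charge" pulls in its transitive dependencies,
# which would then be fetched and placed into the LLM context.
print(graph_retrieve(["billing.charge"], CODE_GRAPH))
```

The point of the sketch: plain keyword or embedding search returns only directly matching files, while the graph walk also surfaces definitions the matched code depends on, which is often what an enterprise-scale answer needs.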

Toward infinite context for code
sourcegraph.com