Vercel has acquired Grep, a code search engine. https://lnkd.in/gHZCAb8N
Vercel’s Post
-
We're excited to unveil a major leap forward at FinetuneDB! ✨🚀 Today marks the launch of our open-source serverless inference API and our next-generation fine-tuning dataset manager. It's now easier than ever to fine-tune and serve open-source models, all on one integrated platform.
🔹 Open-Source Model Fine-Tuning and Serving – Fine-tune and deploy models like Llama 3 directly from FinetuneDB, making it easy to experiment with different models and find the best fit for your use case.
🔹 Version-Controlled Dataset Management – Confidently manage and iterate on your datasets with version control, a visual function-calling editor, and automatic dataset validation.
🔹 Seamless Dataset Creation from Production Data – Quickly build high-quality datasets from production data, making the switch from proprietary to open-source models easy.
The future of LLM development is customized open-source models, and we're enabling that today. Our clients, including Ebbot, Roast, and OMR Reviews, are already redefining what's possible with advanced LLM applications. Are you ready to step into the future with us?
🔗 Check the blog post linked in the comments for full details, and contact us for a demo.
#FinetuneDB #Finetuning
-
Yet again, I've come across a company using spreadsheets to track their LLM prompt revisions... Yes, in Excel, with columns for revision dates and all!
So, for those who might not know, there are actual tools out there for prompt management, for example:
⏺ LangSmith Hub – the logical option if you're already using the LangChain ecosystem
⏺ Langfuse – open-source and super easy to set up
⏺ PromptLayer – another great, user-friendly option
These tools let you implement best practices when iterating on your prompts:
🔹 Track versions of your prompts, and add tags and metadata
🔹 Test new revisions against evaluation datasets
🔹 Only push to production when everything passes ✅
This is how prompt management should be done! Please, let's leave the spreadsheets behind 😅
#PromptEngineering #LLMs #GenAI
https://lnkd.in/g6QyFWfn
https://lnkd.in/gv32kx2P
https://langfuse.com/
The cleanest way to prompt engineer
promptlayer.com
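The workflow those tools support fits in a few lines of code. Here's a hypothetical in-memory sketch of version-controlled prompts with an eval gate before production — not the API of any of the tools above, just the pattern they implement:

```typescript
// Minimal sketch of version-controlled prompt management (hypothetical,
// in-memory; real tools like LangSmith Hub or PromptLayer persist this).
type PromptVersion = { version: number; text: string; tags: string[] };

class PromptRegistry {
  private versions = new Map<string, PromptVersion[]>();

  // Register a new revision; history is append-only, never overwritten.
  commit(name: string, text: string, tags: string[] = []): PromptVersion {
    const history = this.versions.get(name) ?? [];
    const entry = { version: history.length + 1, text, tags };
    this.versions.set(name, [...history, entry]);
    return entry;
  }

  // Promote a revision to production only if it passes every eval check.
  promote(name: string, version: number, evals: ((text: string) => boolean)[]): boolean {
    const entry = this.versions.get(name)?.find(v => v.version === version);
    if (!entry || !evals.every(check => check(entry.text))) return false;
    entry.tags.push("production");
    return true;
  }

  // Fetch the latest revision carrying a given tag.
  get(name: string, tag = "production"): PromptVersion | undefined {
    return this.versions.get(name)?.filter(v => v.tags.includes(tag)).at(-1);
  }
}
```

A spreadsheet can store the versions, but it can't run the eval gate — that's the part that matters.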
-
I think I finally found a VS Code LLM plugin (i.e. a Copilot alternative) that has the features I want. It's open source and lets you bring your own API keys. It uses embeddings and retrieval over your codebase to help with projects whose files exceed the context window, and from some testing, it seems to work quite well. I removed the reranking element from config.json, since by default it uses Continue's endpoint, which puts it into free-trial mode; from what I've seen, and from other users' reports, you don't need reranking except for really large codebases. Eventually, it looks like they'll support additional "roll your own" options for reranking too. https://www.continue.dev/
Continue
continue.dev
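For anyone wanting to replicate the setup: the relevant part of config.json looked roughly like this when I tried it (Continue's schema changes often, so check their docs; the model entry is just an example). Deleting the "reranker" block is what disables the free-trial reranking:

```json
{
  "models": [
    { "title": "GPT-4o", "provider": "openai", "model": "gpt-4o", "apiKey": "<YOUR_KEY>" }
  ],
  "embeddingsProvider": { "provider": "transformers.js" },
  "reranker": { "name": "free-trial" }
}
```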
-
Interesting case to consider
🗞 LazyLLM looks like a super simple, low-code, open-source framework for building multi-agent applications.
💡 LazyLLM supports a variety of applications that leverage multi-agent systems and LLMs, such as RAG, fine-tuning, and content generation, and defines workflows such as pipeline, parallel, diverter, if, switch, and loop.
⛳ These are the features that struck me most:
👉 LazyLLM lets developers assemble AI applications easily from pre-built modules and data flows, even if they aren't familiar with large models.
👉 The framework offers one-click deployment, which can be particularly useful during the proof-of-concept phase, as it manages the sequential start-up and configuration of services like LLMs and embeddings.
👉 It supports deployment across different infrastructure platforms, such as bare-metal servers, Slurm clusters, and public clouds.
👉 It has built-in support for grid search, allowing automatic exploration of different model configurations, retrieval strategies, and fine-tuning parameters to optimize application performance.
The features aren't flashy or anything new, but LazyLLM is built in a straightforward way. The repository is pretty new and actively being developed, but it's nice to see so many low-code approaches being built. It helps developers from diverse backgrounds quickly build a prototype.
Repo: https://lnkd.in/eVSjK5ki
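To make the workflow names concrete: pipeline, parallel, and switch are just function-composition patterns. This plain-TypeScript sketch illustrates the dataflow ideas only — LazyLLM itself is Python, and these hypothetical combinators are not its API:

```typescript
// Conceptual sketch of pipeline / parallel / switch dataflows
// (hypothetical combinators, not LazyLLM's actual Python API).
type Stage = (input: string) => string;

// pipeline: feed each stage's output into the next.
const pipeline = (...stages: Stage[]): Stage =>
  input => stages.reduce((acc, stage) => stage(acc), input);

// parallel: fan the same input out to every stage.
const parallel = (...stages: Stage[]): ((input: string) => string[]) =>
  input => stages.map(stage => stage(input));

// switch: pick exactly one stage based on the input.
const switcher = (pick: (input: string) => number, ...stages: Stage[]): Stage =>
  input => stages[pick(input)](input);
```

A RAG flow, for instance, is a pipeline (retrieve → augment → generate) whose retrieval step may itself be a parallel fan-out over several retrievers.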
-
Continuing the series on our tech stack at Jobbrella, I'd like to give another shoutout to our async/events/queue handler, https://www.inngest.com/.
We previously used a more lightweight, HTTP-only library for this, but as the product grew in complexity we knew we would have to switch it out eventually, so we decided to take the time now rather than later down the road. I did a lot of digging around the ecosystem of queue handlers for Node.js and settled on Inngest after careful consideration. The deciding factor for us was the tight integration with Vercel, which we use for hosting.
Our current setup gives us separate production and testing environments, plus a unique environment for every feature branch. It also comes with a very good local dev server with a useful graphical interface.
Having run the system for a few weeks now, it has been absolutely rock solid in our experience: it makes it easy to handle dead letters and failures of various kinds, and easy for QA to give detailed error reports when we mess up a new feature. Just the separation of events by feature branch (no more digging through events trying to figure out which ones were triggered on that exact env) was worth the switch for us.
The SDK is also very simple to use, and allowed us to rewrite our existing events infrastructure in about half a working week. All in all, very happy with the switch!
Inngest - Queuing and orchestration for modern software teams
inngest.com
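The core idea that sold us — durable steps that retry on failure and are skipped on replay — can be sketched without the SDK. This is a toy stand-in, not Inngest's implementation (their real API is roughly `inngest.createFunction({ id }, { event }, async ({ event, step }) => ...)`; the step name below is made up):

```typescript
// Toy version of the durable-step pattern: each step retries on failure
// and memoizes its result, so a replayed function doesn't redo completed
// work. (Sketch only; not the Inngest SDK.)
type StepFn<T> = () => Promise<T>;

class Step {
  private memo = new Map<string, unknown>();

  async run<T>(id: string, fn: StepFn<T>, retries = 3): Promise<T> {
    // Replay: a step that already succeeded returns its stored result.
    if (this.memo.has(id)) return this.memo.get(id) as T;
    let lastErr: unknown;
    for (let attempt = 0; attempt < retries; attempt++) {
      try {
        const result = await fn();
        this.memo.set(id, result);
        return result;
      } catch (err) {
        lastErr = err; // retry until attempts are exhausted
      }
    }
    throw lastErr;
  }
}
```

The real service adds the parts you actually pay for: persistence across process restarts, dead-letter handling, and the per-branch event separation mentioned above.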
-
Meet OpenCodeInterpreter: A Family of Open-Source Code Systems Designed for Generating, Executing, and Iteratively Refining Code The ability to automatically generate code has transformed from a nascent idea to a practical tool, aiding developers in creating complex software applications more efficiently. However, a gap remains between the generation of syntactically correct ... https://lnkd.in/e-xdcm8E #AI #ML #Automation
Meet OpenCodeInterpreter: A Family of Open-Source Code Systems Designed for Generating, Executing, and Iteratively Refining Code
openexo.com
-
How can LangChain JS be used to build context-aware chatbots? *** LangChain JS can be used to build context-aware chatbots by combining text processing, embedding generation, and vector storage in a Supabase vector store. This tutorial provides a step-by-step guide to building a chatbot that dynamically responds to user queries based on specific documents, offering practical, real-world applications. *** Complete tutorial: https://lnkd.in/e57ky89G
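The retrieval core of such a chatbot reduces to embedding the user's query and ranking stored document vectors by cosine similarity. A self-contained sketch with hand-made toy vectors — in the actual tutorial stack, the embeddings come from a model via LangChain JS and the vectors live in a Supabase table:

```typescript
// Toy vector-store retrieval: rank documents by cosine similarity to a
// query embedding. Real embeddings are high-dimensional; these 2-d
// vectors are hand-made for illustration.
type Doc = { text: string; embedding: number[] };

const cosine = (a: number[], b: number[]): number => {
  const dot = a.reduce((sum, x, i) => sum + x * b[i], 0);
  const norm = (v: number[]) => Math.sqrt(v.reduce((s, x) => s + x * x, 0));
  return dot / (norm(a) * norm(b));
};

// Return the k documents most similar to the query vector.
const topK = (query: number[], docs: Doc[], k: number): Doc[] =>
  [...docs]
    .sort((x, y) => cosine(query, y.embedding) - cosine(query, x.embedding))
    .slice(0, k);
```

The retrieved documents are then stuffed into the prompt as context, which is what makes the chatbot "context-aware".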
-
CodeRabbit is an AI-driven code review tool that integrates with GitHub for line-by-line analysis and real-time chat, running static analysis and security checks with AI support. We’ve been using it on open-source repos like Atmos to speed up bug fixes, and it’s been great for onboarding newer developers. The AI suggestions are impressively accurate, catching issues easily missed in quick reviews, and it improves over time with feedback. It’s boosted our code quality and error handling, and for open-source projects, it’s free (though with lower rate limits). The only downside is that the volume of comments can be overwhelming, making PRs harder to navigate. Highly recommend it!
-
I’ve just come across an article about LangGraph, a multi-agent workflow extension of LangChain. In scenarios where multiple agents need to collaborate, it allows for a carefully designed collaboration network: multiple agents work exactly as expected, in a managed and maintainable way. Read more on the official LangChain blog: https://lnkd.in/grEbVuYX In commercial settings, it's quite common to need a highly opinionated, carefully planned workflow of multiple agents to keep outputs consistent with expectations. This differs from tools like Autogen, which frames itself more as a "conversation" engine for multiple agents. It's always exciting to study the pros and cons of different tools. There is no one-size-fits-all solution, and that's what makes it so interesting!
LangGraph: Multi-Agent Workflows
blog.langchain.dev
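The core idea — agents as nodes in a graph, with explicit routing deciding who acts next — can be sketched without the library. A hypothetical mini state machine, not LangGraph's actual API (which centers on a StateGraph builder):

```typescript
// Minimal multi-agent graph: nodes transform shared state, a router
// picks the next node, and the run ends at "END". (Conceptual sketch,
// not LangGraph's API.)
type State = { messages: string[] };
type AgentNode = (state: State) => State;
type Router = (state: State) => string;

class AgentGraph {
  private nodes = new Map<string, { node: AgentNode; route: Router }>();

  addNode(name: string, node: AgentNode, route: Router): this {
    this.nodes.set(name, { node, route });
    return this;
  }

  run(start: string, state: State, maxSteps = 10): State {
    let current = start;
    for (let i = 0; i < maxSteps && current !== "END"; i++) {
      const entry = this.nodes.get(current);
      if (!entry) throw new Error(`unknown node: ${current}`);
      state = entry.node(state);       // the agent acts on shared state
      current = entry.route(state);    // explicit edge decides who's next
    }
    return state;
  }
}
```

The explicit `route` function is the difference from a conversation engine like Autogen: control flow is designed up front rather than negotiated between agents.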
-
How to cope with technology FOMO https://lnkd.in/dEepFXD9
How to cope with technology FOMO
https://avdi.codes
1w: Good job Vercel. Can you acquire WordPress too?