Enjoyed this deep dive into how Dan Shipper has been using Gemini 1.5 Pro with the 1M-token context window. Some of my favourite quotes: [on feature development] "It found the right place in the code and wrote the code..." [on predicting the future] "Every chat prompt will include all of our emails, and all of our journal entries, and maybe a book or two for good measure." [on what's next] "It’s an exciting time. Expect more experiments from me in the weeks to come!" 🔜 Can't wait to see what else he tries out! https://lnkd.in/gvs4ABhF
Jaclyn Konzelmann’s Post
More Relevant Posts
-
5️⃣ Day 5/7: Functions, Tools, and Agents with LangChain 🚀

I’m excited to share that I’ve just completed the fifth course in my 7-day challenge on DeepLearning.AI! Today’s course, Functions, Tools, and Agents with LangChain, took my understanding of AI to a whole new level.

Key Takeaways:
- OpenAI Function Calling: Mastered the use of OpenAI's function calling feature within LangChain using LangChain Expression Language (LCEL), unlocking new possibilities for dynamic interaction.
- Tagging and Extraction: Learned how to effectively extract and tag information from documents and text into a structured output (like JSON), making data processing more efficient.
- Tools and Routing: Explored how to integrate multiple tools, such as APIs, into an LLM and empower it to decide which tool to use, enhancing its decision-making and problem-solving capabilities.
- Conversational Agents: Built a sophisticated conversational agent that autonomously uses tools (a weather API and the Wikipedia API) to achieve the desired outcome, streamlining and automating complex tasks.

Benefits: This course has equipped me with the skills to develop highly capable AI systems that can autonomously interact with various tools, extract valuable information, and carry on meaningful conversations to achieve specific goals. The hands-on experience with function calling and tool integration is particularly relevant for creating AI solutions that are not only smart but also versatile and adaptive. As I continue this learning journey, I’m excited about the endless possibilities these new skills bring to real-world applications.

If you’re in HR or talent acquisition and have opportunities that align with my skills, please feel free to reach out directly. I’d love to connect!

#LearningJourney #AI #LangChain #ConversationalAI #ToolsAndAgents #ContinuousLearning #CareerDevelopment
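A minimal sketch of the tagging-and-extraction pattern from the takeaways above, composed with LCEL. It uses with_structured_output, which wraps OpenAI function/tool calling under the hood and assumes a recent langchain-openai; the Tagging schema, model name, and example input are illustrative assumptions rather than code from the course.

```python
# Hedged sketch: tagging/extraction via OpenAI function calling in LangChain (LCEL).
# Assumes langchain-openai is installed and OPENAI_API_KEY is set.
from pydantic import BaseModel, Field
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

class Tagging(BaseModel):
    """Sentiment and language tags for a piece of text."""
    sentiment: str = Field(description="positive, negative, or neutral")
    language: str = Field(description="ISO 639-1 code of the text's language")

prompt = ChatPromptTemplate.from_messages([
    ("system", "Extract the requested tags from the user's text."),
    ("user", "{input}"),
])

# prompt | model is LCEL's pipe syntax: each runnable's output feeds the next.
model = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)
chain = prompt | model.with_structured_output(Tagging)

result = chain.invoke({"input": "LangChain rend la création d'agents très simple !"})
print(result)  # e.g. Tagging(sentiment='positive', language='fr')
```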
Nacer KROUDIR, congratulations on completing Functions, Tools and Agents with LangChain!
learn.deeplearning.ai
-
LangChain lets developers build GenAI applications with a base model as the fundamental building block, and this blog post is a practical guide to getting proficient with LangChain in seven systematic steps. https://lnkd.in/gxsntxFM
Learn LangChain in 7 Easy Steps
rabbitmetrics.com
-
The Emerging #LLM Stack for #RAG. Every time a new technology bursts on the scene, there is a scramble among thought leaders, analysts, investors, and start-ups to figure out the implications, get one or two steps ahead of the early adopters, and start building out the “new platform” needed to enable the new technology for a wider audience. The search for the “platform” or the “stack” or “infrastructure” or “middleware” or “core libraries” or “service components” or “APIs” is a well-worn path in the history of Silicon Valley and the pursuit of the new-new thing. https://lnkd.in/eYtEb-jy
The Emerging LLM Stack for RAG
medium.com
-
Hi AI/ML engineers and aspirants, please check out this video to expand your knowledge: it walks through real-time deployment of a large language model as an API using LangChain and FastAPI.
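A minimal sketch of serving a LangChain chain as an HTTP API with FastAPI, in the spirit of the video linked below. This is one common pattern, not necessarily the video's exact setup; the /chat route, request schema, and model name are illustrative assumptions (LangServe's add_routes helper is another way to expose a chain).

```python
# Hedged sketch: expose an LCEL chain behind a FastAPI endpoint.
# Assumes fastapi, uvicorn, and langchain-openai are installed and OPENAI_API_KEY is set.
from fastapi import FastAPI
from pydantic import BaseModel
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

app = FastAPI(title="LLM as API")

prompt = ChatPromptTemplate.from_template("Answer concisely: {question}")
chain = prompt | ChatOpenAI(model="gpt-3.5-turbo") | StrOutputParser()

class Query(BaseModel):
    question: str

@app.post("/chat")
async def chat(query: Query):
    # ainvoke keeps the event loop free while the LLM call is in flight.
    answer = await chain.ainvoke({"question": query.question})
    return {"answer": answer}

# Run locally with: uvicorn main:app --reload
```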
3-Langchain Series-Production Grade Deployment LLM As API With Langchain And FastAPI
https://www.youtube.com/
-
🚀 Breaking Down the Magic of LangChain! 🚀

LangChain is changing the game when it comes to building powerful apps with Large Language Models (LLMs)! 🌟 It's quickly becoming a go-to framework, especially for Agents and RAG (Retrieval-Augmented Generation) applications. And guess what? It’s making waves across industries too! 🌍

💡 Imagine you want to build an app around a large model, like Claude 3.5 Sonnet, and also integrate it with your personal data—emails, PDFs, or even your Notion database. Plus, you need to create dynamic prompts based on user inputs, and later, switch to another LLM or tool seamlessly. Sounds complex, right? Well, here’s where LangChain works its magic! 🧙♂️

✨ It abstracts the heavy lifting of connecting and communicating with LLMs, making it as easy as switching shoes. You just need to plug in the APIs, and boom, you’ve got the same smooth interface for any LLM vendor! No extra hassle.

🌟 Dynamic Prompts? LangChain has you covered with templates that can inject user input on the fly.
🌟 Different Data Sources? From emails to PDFs, LangChain’s document loaders give you a single interface to work with everything effortlessly.
🌟 Agents Ecosystem? It’s the heart of LangChain! 💪

LangChain makes synchronizing LLM applications a breeze, so you can focus on what really matters—innovation! 🔥

#LangChain #AI #MachineLearning #LLM #ArtificialIntelligence #DataIntegration #Agents #Innovation #AIEngineer #TechTransformation
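To make the "same smooth interface for any LLM vendor" point concrete, here is a small hedged sketch: the same LCEL prompt-and-parser pipeline run against two different providers. The model names are illustrative, and it assumes the langchain-openai and langchain-anthropic packages plus the corresponding API keys.

```python
# Hedged sketch: swap the model vendor, keep the exact same chain interface.
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI
from langchain_anthropic import ChatAnthropic

# A dynamic prompt template that injects user input on the fly.
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant for the {team} team."),
    ("user", "{question}"),
])

# Same prompt, same pipe operator, different vendor behind it.
openai_chain = prompt | ChatOpenAI(model="gpt-4o-mini") | StrOutputParser()
claude_chain = prompt | ChatAnthropic(model="claude-3-5-sonnet-20240620") | StrOutputParser()

inputs = {"team": "data engineering", "question": "What does a document loader do?"}
print(openai_chain.invoke(inputs))
print(claude_chain.invoke(inputs))
```

The data side works the same way: document loaders such as PyPDFLoader in langchain_community all return the same Document objects, so downstream code does not care whether the text came from a PDF, an email export, or a Notion dump.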
-
I did this quick cert on Credly to learn about Gemini and how Google is building it into EVERYTHING it does. One of the most interesting things I learned from the course is how much Gemini functionality is available to Google users in their free tools.
Google AI Essentials was issued by Coursera to Tanner Dorheim.
credly.com
-
For example, at Every, we’re incubating a software product that can help you organize your files with AI. I wrote the original code for the file organizer, and our lead engineer, Avishek, wrote a GPT-4 integration. He wanted to know where to hook the GPT-4 integration into the existing codebase. It [Gemini] found the right place in the code and wrote the code Avishek needed in order to complete the integration. This is something just short of magic, dramatically accelerating developer productivity, especially on larger projects.

…

“People have been saying that data is the new oil for a long time. But I do think, in this case, if you’ve spent a lot of time collecting and curating your own personal set of notes, articles, books, and highlights it’ll be the equivalent of having a topped-off oil drum in your bedroom during an OPEC crisis.”

Gemini is the perfect example of why this is true. With its large context window, all of the personal data you’ve been collecting is at the tip of your fingers, ready to be deployed at the right place and the right time, in whatever task you need it for. The more personal data you have—even if it’s disorganized—the better.
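As an illustration of what "all of your personal data at your fingertips" can look like in practice, here is a hedged sketch that stuffs a folder of notes into a single Gemini 1.5 Pro prompt via the google-generativeai Python SDK. The notes/ folder, file pattern, and question are hypothetical; this is not Dan Shipper's or Every's actual workflow.

```python
# Hedged sketch: lean on Gemini 1.5 Pro's long context instead of chunking/retrieval.
# Assumes the google-generativeai package is installed and a valid API key.
import glob
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder
model = genai.GenerativeModel("gemini-1.5-pro")

# Read an entire (modestly sized) personal note archive into the prompt.
notes = []
for path in glob.glob("notes/**/*.md", recursive=True):  # hypothetical folder
    with open(path, encoding="utf-8") as f:
        notes.append(f"--- {path} ---\n{f.read()}")

prompt = (
    "Here is my personal note archive:\n\n"
    + "\n\n".join(notes)
    + "\n\nUsing only these notes, what themes was I writing about most last year?"
)

response = model.generate_content(prompt)
print(response.text)
```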
I Spent a Week With Gemini Pro 1.5—It’s Fantastic
every.to
-
Learn how to build long-term memory for your chatbot by following along with Deepsha Menghani's streamlined tutorial, which relies on LangChain tools to accomplish this goal.
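A hedged sketch of the general idea (persisting chat history so it survives restarts) using LangChain's RunnableWithMessageHistory with a file-backed message store. It is not necessarily the tutorial's exact approach; the history file naming, session id, and model are illustrative.

```python
# Hedged sketch: durable chatbot memory via a file-backed message history.
# Assumes langchain, langchain-community, and langchain-openai are installed.
from langchain_community.chat_message_histories import FileChatMessageHistory
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.runnables.history import RunnableWithMessageHistory
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    MessagesPlaceholder(variable_name="history"),
    ("user", "{input}"),
])
chain = prompt | ChatOpenAI(model="gpt-3.5-turbo")

def get_history(session_id: str) -> FileChatMessageHistory:
    # Messages are written to a JSON file, so they persist across restarts.
    return FileChatMessageHistory(f"history_{session_id}.json")

chatbot = RunnableWithMessageHistory(
    chain,
    get_history,
    input_messages_key="input",
    history_messages_key="history",
)

config = {"configurable": {"session_id": "demo-user"}}
print(chatbot.invoke({"input": "My name is Priya."}, config=config).content)
print(chatbot.invoke({"input": "What is my name?"}, config=config).content)
```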
From Ephemeral to Persistence with LangChain: Building Long-Term Memory in Chatbots
towardsdatascience.com
-
RAG Models

Retrieval-Augmented Generation (RAG) models, which I have been learning to implement over the past week, are a way to build an AI chatbot that is grounded in your specific data (without retraining the model). It's like being able to talk with your data! These models "hallucinate" much less than something like ChatGPT: if they don't have an answer, they will tell you "I don't know" rather than making something up. If you're interested in implementing this, LangChain (a framework for building LLM-powered applications) has a free course! https://lnkd.in/eXjcCqBf
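For anyone curious what the moving parts look like, here is a hedged, minimal RAG sketch in LangChain: embed some texts, retrieve the relevant ones, and answer only from that context. The sample contracts, prompt wording, and model are illustrative assumptions; it assumes langchain-openai and faiss-cpu are installed.

```python
# Hedged sketch: a minimal retrieval-augmented generation chain with LCEL.
from langchain_community.vectorstores import FAISS
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough
from langchain_openai import ChatOpenAI, OpenAIEmbeddings

texts = [  # stand-in for your own documents
    "Contract A was signed in 1948 and renews every five years.",
    "Contract B was signed in 1973 with a perpetual license clause.",
]
retriever = FAISS.from_texts(texts, OpenAIEmbeddings()).as_retriever()

prompt = ChatPromptTemplate.from_template(
    "Answer using only the context below. If the answer is not there, say 'I don't know.'\n\n"
    "Context:\n{context}\n\nQuestion: {question}"
)

def format_docs(docs):
    return "\n\n".join(d.page_content for d in docs)

chain = (
    {"context": retriever | format_docs, "question": RunnablePassthrough()}
    | prompt
    | ChatOpenAI(model="gpt-3.5-turbo")
    | StrOutputParser()
)

print(chain.invoke("Which contracts were signed after 1950?"))
```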
LangChain: Chat with Your Data
deeplearning.ai
-
🚀 Handling large document sets for question answering can be a challenging task. When working with data or documents, we often struggle to determine the optimal number of chunks to select to provide a comprehensive answer. Invariably, there are instances where we need to analyze a substantial number of documents to address a query adequately. For example, consider a query like "Please let me know all the documents where the contract year is greater than 1950" posed against a stack of, say, 1,000 documents.

🔍 In this regard, the "MapReduce" and "Refine" retrieval QA chains offered by LangChain appear highly intriguing. According to the documentation, these methods can be advantageous when seeking a comprehensive answer that requires distilling key elements from a large corpus of relevant context.

🌟 Instead of merely selecting the top-K chunks, these techniques provide an opportunity to synthesize insights from all pertinent sources, thereby enhancing the depth and quality of the final answer.

📚 Harrison Chase's insightful course, "LangChain: Chat with Your Data" on the DeepLearning.AI platform, may be well worth an hour of your time.

#GenAI #QuestionAnswering #Langchain #MapReduce #Refine #aiproductmanagement
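A hedged sketch of how those two chain types are selected with LangChain's classic RetrievalQA helper; the toy corpus, model, and query are illustrative, and the vector store could be any LangChain retriever over your real documents.

```python
# Hedged sketch: "map_reduce" vs. "refine" question answering over many chunks.
# map_reduce answers each retrieved chunk independently and then combines the
# partial answers; refine builds one answer and revises it chunk by chunk.
from langchain.chains import RetrievalQA
from langchain_community.vectorstores import FAISS
from langchain_openai import ChatOpenAI, OpenAIEmbeddings

docs = [  # stand-in corpus; imagine ~1,000 contract summaries here
    "Contract 17, signed in 1962, covers equipment leasing.",
    "Contract 42, signed in 1947, covers land rights.",
]
retriever = FAISS.from_texts(docs, OpenAIEmbeddings()).as_retriever()  # raise k for real corpora

llm = ChatOpenAI(model="gpt-3.5-turbo")

qa_map_reduce = RetrievalQA.from_chain_type(llm=llm, chain_type="map_reduce", retriever=retriever)
qa_refine = RetrievalQA.from_chain_type(llm=llm, chain_type="refine", retriever=retriever)

query = "Which documents have a contract year greater than 1950?"
print(qa_map_reduce.invoke({"query": query})["result"])
print(qa_refine.invoke({"query": query})["result"])
```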
DLAI - LangChain Chat with Your Data
learn.deeplearning.ai
Senior Advisor to the Experience Advisors
9mo · I really appreciate using this.