⭐️ New video release 📺: RAG for a medical company: the technical and product challenges

Watch Noé Achache discuss the technical and product challenges of building a performant #RAG for a medical company, focusing on leveraging #LLMs and enhancing retrieval and generation metrics to bring value to users in the health sector.

📺 Watch the video on YouTube: https://lnkd.in/eb4xXUWk

@Noé Achache, GenAI Leader and Lead Data Scientist at Theodo Data & AI (ex-SICARA), shared insights on the technical and product challenges faced while developing a Retrieval Augmented Generation (RAG) system for a medical company. The talk highlighted the complexities of using tools like Chainlit, Qdrant, and LangSmith to let doctors query drug documentation in natural language.

The discussion focused on enhancing retrieval and generation metrics through the strategic use of Large Language Models (LLMs), despite their inherent accuracy limitations, which matter all the more in the healthcare sector. By incorporating sources directly into generated answers and using tools like LangSmith for logging and dataset augmentation, the team built user trust and ensured the correctness of interactions. The session underscored how technical improvements and product design together create a performant RAG that delivers value to users while addressing challenges unique to the medical domain.
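Not from the talk itself, but a minimal sketch of the retrieve-then-generate loop described above, assuming a Qdrant collection of drug-document chunks carrying `text` and `source` payload fields and an OpenAI-style chat model; the collection name, embedding model, model name, and prompt are illustrative placeholders, not the team's actual setup.

```python
# Minimal RAG sketch (illustrative only): embed the question, retrieve drug-document
# chunks from Qdrant, and ask an LLM to answer while citing its sources.
from openai import OpenAI
from qdrant_client import QdrantClient

llm = OpenAI()
qdrant = QdrantClient(url="http://localhost:6333")

def answer(question: str, collection: str = "drug_docs", k: int = 5) -> str:
    # Embed the user question (embedding model name is an assumption).
    vector = llm.embeddings.create(
        model="text-embedding-3-small", input=question
    ).data[0].embedding

    # Retrieve the k most similar chunks; payloads are assumed to carry text + source.
    hits = qdrant.search(collection_name=collection, query_vector=vector, limit=k)
    context = "\n\n".join(f"[{h.payload['source']}] {h.payload['text']}" for h in hits)

    # Generate an answer constrained to the retrieved context and asked to cite sources.
    prompt = (
        "Answer the question using only the context below. "
        "Cite the [source] of every statement.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    response = llm.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```

Keeping the [source] tags in the prompt is what makes it possible to surface citations in the generated answer, which is the trust mechanism the talk emphasises.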
PyCon DE & PyData’s Post
More Relevant Posts
-
And here's the second part of my blog series on understanding multi-armed bandit (MAB) algorithms. In this one, we focus on the applications of MABs in recommendation systems, clinical trials, and much more. Would love to hear your feedback! https://lnkd.in/d_MEx9A3
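Not from the blog post itself, but a minimal epsilon-greedy sketch of the multi-armed-bandit idea the series covers; the arm success rates and exploration rate below are made up for illustration.

```python
import random

# Minimal epsilon-greedy bandit: explore a random arm with probability eps,
# otherwise exploit the arm with the best observed mean reward.
def epsilon_greedy(true_rates, steps=10_000, eps=0.1, seed=0):
    rng = random.Random(seed)
    counts = [0] * len(true_rates)    # pulls per arm
    values = [0.0] * len(true_rates)  # running mean reward per arm
    total = 0.0
    for _ in range(steps):
        if rng.random() < eps:
            arm = rng.randrange(len(true_rates))                        # explore
        else:
            arm = max(range(len(true_rates)), key=lambda a: values[a])  # exploit
        reward = 1.0 if rng.random() < true_rates[arm] else 0.0  # Bernoulli reward
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]      # incremental mean
        total += reward
    return values, total

# Example: three "treatments"/recommendations with unknown success rates.
print(epsilon_greedy([0.05, 0.10, 0.20]))
```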
-
The use of collective intelligence methods to synthesize differential diagnoses, combining the responses of different LLMs, achieves three of the necessary steps toward advancing LLMs as a diagnostic support tool: demonstrating sufficiently high diagnostic accuracy, reducing the risk of misdiagnoses, and eliminating the dependence on a single commercial vendor. A promising way to augment doctors' capacity to shorten the diagnostic gap and reduce the patient odyssey!
Combining Multiple Large Language Models Improves Diagnostic Accuracy
ai.nejm.org
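For readers curious what the collective-intelligence step might look like in code, here is a toy sketch that pools ranked differential-diagnosis lists from several models and aggregates them with a simple rank-weighted vote; the aggregation rule and the example outputs are assumptions for illustration, not the authors' actual method.

```python
from collections import Counter

# Toy aggregation of differential diagnoses from multiple LLMs: each model returns
# a ranked list, and diagnoses earn points by rank (a simple Borda-style vote).
def aggregate_diagnoses(model_outputs: dict[str, list[str]], top_k: int = 3) -> list[str]:
    scores = Counter()
    for ranked in model_outputs.values():
        for position, dx in enumerate(ranked):
            scores[dx.lower()] += len(ranked) - position  # higher rank -> more points
    return [dx for dx, _ in scores.most_common(top_k)]

# Hypothetical outputs from three different vendors' models.
outputs = {
    "model_a": ["pulmonary embolism", "pneumonia", "heart failure"],
    "model_b": ["pneumonia", "pulmonary embolism", "pleural effusion"],
    "model_c": ["pulmonary embolism", "pleural effusion", "pneumonia"],
}
print(aggregate_diagnoses(outputs))  # consensus differential across models
```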
-
Loving the conversational interface (and clear references) that OpenEvidence provides. Take a look when you get a chance. https://lnkd.in/deHU4pyy
OpenEvidence
openevidence.com
-
LabLinks is here to revolutionise your scientific workflows 🔬

At LabLinks, we are at the forefront of digital transformation in the laboratory. Our mission is to streamline and enhance your lab processes, ensuring precision, efficiency, and innovation every step of the way.

Here’s some of the digital innovation we offer:
🚀 Scientific Web Apps & Tools
🧬 Oligonucleotides Workflow Tracking
🤝 ELN System Integration
📈 Chromatography Data System Monitoring
🧪 LC Eluent Level Monitoring

Let’s work together to push the boundaries of what’s possible in science and research. Simply let us know what you need, and we'll create user-friendly digital solutions that enable seamless data analysis, enhance collaboration, and simplify delivery.

Discover more about what we do here and let's start the conversation: https://lnkd.in/ejGYx-ug

#LabLinks #LaboratoryManagement #DigitalTransformation #InnovationInScience #ELN #DataAnalytics #Oligonucleotides #DigitalInnovation #Chromatography #ScientificResearch
What We Do | Lablinks
lablinks.io
-
There are standardized evaluation frameworks that are of huge help for general-purpose foundation models such as large language models (LLMs). For example, we have standardized tests like MMLU (multiple-choice questions covering 57 disciplines such as math, philosophy, and medicine) and HumanEval (testing code generation). Building these assessment tools is a great deal of work and requires expertise. Such evaluation frameworks are neither scalable nor perfect, but they serve an invaluable role in giving LLM users a quantitative sense of a model's performance.

In contrast, our current options for evaluating custom applications built using LLMs are far more limited. Here, I see three major areas for improvement: (a) defining the queries, (b) defining the metrics, and (c) defining the ground truth. By implementing such an evaluation framework, you can quantify the performance of your custom LLM application along the metrics you consider most important, which sets the stage for identifying what to improve and how. Enjoy this great resource. Link below. #AI #LLM #Technology
Papers with Code - Medical
paperswithcode.com
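A minimal sketch of the three pieces called out above for a custom LLM application: defined queries, a ground-truth answer per query, and a metric. The `ask_llm` callable stands in for your application, and the keyword-recall metric (with made-up expected keywords) is purely illustrative, not a standard framework.

```python
# Minimal evaluation-harness sketch: queries + ground truth + a metric.
from typing import Callable

def keyword_recall(answer: str, expected_keywords: list[str]) -> float:
    # Fraction of expected keywords that appear in the generated answer.
    answer = answer.lower()
    hits = sum(1 for kw in expected_keywords if kw.lower() in answer)
    return hits / len(expected_keywords)

def evaluate(ask_llm: Callable[[str], str], dataset: list[dict]) -> float:
    # Run every query through the application and average the metric.
    scores = []
    for example in dataset:
        answer = ask_llm(example["query"])
        scores.append(keyword_recall(answer, example["expected_keywords"]))
    return sum(scores) / len(scores)

dataset = [
    {"query": "Maximum daily dose of drug X for adults?",
     "expected_keywords": ["4 g", "24 hours"]},  # made-up ground truth
]
# mean_score = evaluate(my_rag_app, dataset)
```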
-
Good analogy, from a Govtech publication: "Why RAG? Fine-tuning (FT) and Retrieval-Augmented Generation (RAG) are two primary methods for customising large language models to domain-specific use cases. Consider the following analogy, where we have two individuals who want to be doctors. The first individual has deeply studied and practised medicine over an extended period, gradually building up significant internalised knowledge. This is analogous to fine-tuning, where we train a model to specialise in domain-specific data given sufficient time and resources. The second individual is significantly less experienced than the first, but he has the uncanny ability to reference any information source quickly and accurately, granting him fast access to a wide range of medical information. This is analogous to RAG, where we have a general-purpose LLM with access to a comprehensive knowledge base."
-
When using GPT (Generative Pre-trained Transformer) technology, such as Drug-GPT, it is crucial to ask the right questions, in the right way, to get the best results. We asked our Data and Intelligence Expert, Roma English Owen, what her top tip would be when using Drug-GPT. Check out her advice below. To learn more about Drug-GPT, check out our website: https://hubs.la/Q02Jx8Tp0
-
Here’s a broader RAG. Built on 3.5, it feels too general; a specialty-focused approach might get more traction. ClinicalRAG: Enhancing Clinical Decision Support through Heterogeneous Knowledge Retrieval https://lnkd.in/gs2DkwwE
2024.knowllm-1.6.pdf
aclanthology.org
-
Figuring out the "best" approximation of an event's date when navigating conflicting documentation is hard. Take this week's challenge and see how you did. 1) How does the Precision Matrix resolve conflicting dates within medical records? 2) What factors influence the reliability of temporal objects used to determine event dates? 3) Why are fully defined time/dates considered the most accurately recorded temporal points? 4) How can we address the reliability of input sources and data provenance? This week's post might help: https://lnkd.in/gZMZYDXT
Navigating Precision: Unraveling Conflicting Dates in Medical Records
timespacemedicine.wixsite.com
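As a rough illustration of the idea behind these questions, the sketch below prefers the most precisely recorded temporal value and breaks ties with a per-source reliability weight; the precision ranking and source weights are assumptions for illustration, not the post's actual Precision Matrix.

```python
# Toy resolution of conflicting event dates: prefer the most precisely recorded value,
# breaking ties with a (made-up) reliability weight per source.
PRECISION_RANK = {"datetime": 3, "date": 2, "month": 1, "year": 0}
SOURCE_WEIGHT = {"operative_note": 1.0, "discharge_summary": 0.8, "patient_recall": 0.4}

def resolve(candidates: list[dict]) -> dict:
    # Each candidate: {"value": ..., "precision": ..., "source": ...}
    return max(
        candidates,
        key=lambda c: (PRECISION_RANK[c["precision"]], SOURCE_WEIGHT.get(c["source"], 0.5)),
    )

conflicting = [
    {"value": "2023-06", "precision": "month", "source": "patient_recall"},
    {"value": "2023-06-14T10:30", "precision": "datetime", "source": "operative_note"},
    {"value": "2023-06-13", "precision": "date", "source": "discharge_summary"},
]
print(resolve(conflicting)["value"])  # -> the fully defined operative-note timestamp
```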
-
How searchable is your data? Is getting the right information taking longer than it should? It may be time to supercharge your InterSystems IRIS database with vector search, a new capability from InterSystems. By storing vector embeddings, users can enhance the functionality of the software and develop new AI-based applications. Read more about the new offering in Scott Gnau's blog here: https://lnkd.in/ebbuqF6e
InterSystems expands the InterSystems IRIS data platform with Vector Search to support next-generation AI applications
intersystems.com
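For readers new to the concept, here is a vendor-neutral sketch of what vector search does under the hood: embed records, embed the query, and rank by cosine similarity. This is plain NumPy for illustration only, not the InterSystems IRIS API.

```python
import numpy as np

# Generic vector-search idea: rank stored embeddings by cosine similarity to a query embedding.
def cosine_top_k(query_vec: np.ndarray, doc_vecs: np.ndarray, k: int = 3) -> np.ndarray:
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    scores = d @ q
    return np.argsort(-scores)[:k]  # indices of the k most similar records

rng = np.random.default_rng(0)
docs = rng.normal(size=(100, 384))  # pretend embeddings for 100 records
query = rng.normal(size=384)        # pretend embedding for the search query
print(cosine_top_k(query, docs))
```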