At the DSC conference, we are committed to delivering the latest advancements and trends in data science. Now, we proudly present the talks from DSC ADRIA 24, showcasing insights from the Data & AI professionals who are leading the field.

Today we’re sharing Catalin Hanga, PhD’s talk: RAG: Bridging the Gap between Information Retrieval & NLG

In his talk, Catalin delved into Retrieval Augmented Generation (RAG): its theoretical foundations, practical applications, and technical challenges, and its potential to enhance LLMs by leveraging pre-existing knowledge from external corpora, with a focus on document summarization and database management tasks.

This talk was held on May 23rd at DSC Adria 2024 in Zagreb. Don't miss next year's DSC ADRIA 25 – it's set to be the biggest AI & Data conference in Croatia! 🤫

For the full video, click the link below: https://lnkd.in/drapsbM5

#ai #datascience #ml #dscadria #zagreb #RAG #NLG #data #tech
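The RAG loop Catalin describes (retrieve from an external corpus, then condition generation on that context) can be sketched in a few lines. The keyword-overlap retriever and helper names below are illustrative stand-ins; a real pipeline would use an embedding model and a vector index.

```python
import re

# Minimal RAG sketch: retrieve the passages most relevant to the query,
# then build an augmented prompt for the LLM. Retrieval here is naive
# keyword overlap; a real pipeline would embed passages into a vector index.

def tokens(text):
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def score(query, passage):
    q = tokens(query)
    return len(q & tokens(passage)) / (len(q) or 1)

def retrieve(query, corpus, k=2):
    return sorted(corpus, key=lambda p: score(query, p), reverse=True)[:k]

def build_prompt(query, corpus):
    context = "\n".join(retrieve(query, corpus))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

corpus = [
    "RAG retrieves passages from an external corpus before generation.",
    "Database management covers indexing and query planning.",
    "Summarization condenses a long document into its key points.",
]
prompt = build_prompt("How does RAG use an external corpus?", corpus)
```

The point of the sketch is only the shape of the pipeline: retrieval runs first, and the generator sees the retrieved context alongside the question.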
DSC ADRIA’s Post
More Relevant Posts
-
If you're seeking valuable insights and innovative ideas in data science, we have something special for you! Whether you missed the event or just want more, the DSC ADRIA 24 talks are here for you!

Today we’re sharing Miloš Košprdić’s talk: Verif.ai: A Trustworthy Scientific Generative Search Engine

Miloš presents Verif.ai, an innovative open-source scientific question-answering system designed to deliver referenced and verifiable answers. The system employs a dual-stage process, combining robust information retrieval with fine-tuned generative models and a verification engine to ensure answer accuracy and reliability. By integrating semantic and lexical search over scientific papers and using a Retrieval-Augmented Generation (RAG) module, Verif.ai generates claims with references to source materials, bolstering answer credibility. Additionally, a Verification Engine cross-checks generated claims against the original articles, flagging any potential inaccuracies.

This talk was held on May 23rd at DSC Adria 2024 in Zagreb. Don't miss next year's DSC ADRIA 25 – it's set to be the biggest AI & Data conference in Croatia! 🤫

For the full video, click the link below: https://lnkd.in/dyKUrqXD

#ai #datascience #generative #search #engine #verifAI #ml #dscadria
Verif.ai: A Trustworthy Scientific Generative Search Engine | Milos Kosprdic | DSC ADRIA 24
https://www.youtube.com/
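The verification stage Miloš describes (cross-checking generated claims against the cited sources) can be illustrated with a deliberately naive sketch. Verif.ai uses fine-tuned models for this step; the token-overlap check below is only a stand-in to show where verification sits in the pipeline, and all names are hypothetical.

```python
import re

# Toy claim-verification pass: flag generated claims whose wording shares
# little vocabulary with the cited source sentence. A real verification
# engine would use an entailment model over the full referenced article.

def _tokens(text):
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def claim_overlap(claim, source):
    c = _tokens(claim)
    return len(c & _tokens(source)) / (len(c) or 1)

def verify(claims_with_refs, threshold=0.5):
    """Return (claim, supported?) pairs for each claim and its cited source."""
    return [(claim, claim_overlap(claim, source) >= threshold)
            for claim, source in claims_with_refs]
```

In the real system the boolean would come from a learned model comparing each generated claim against its reference, but the interface (claim in, supported/unsupported out) is the same.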
-
Our great presenter Miloš discusses the Verif.ai project - a system uniquely designed for question-answering with verification in the biomedical field.
Verif.ai: A Trustworthy Scientific Generative Search Engine | Milos Kosprdic | DSC ADRIA 24
https://www.youtube.com/
-
𝗥𝗲𝗳𝗹𝗲𝗰𝘁𝗶𝗼𝗻-𝟳𝟬𝗕 𝘂𝗻𝗱𝗲𝗿 𝗶𝗻𝘃𝗲𝘀𝘁𝗶𝗴𝗮𝘁𝗶𝗼𝗻 𝗳𝗿𝗼𝗺 𝘁𝗵𝗲 𝗰𝗼𝗺𝗺𝘂𝗻𝗶𝘁𝘆! ⚙️

Last week I posted about Reflection-70B, a model with remarkable performance for its relatively small size, reportedly beating the behemoths GPT-4o and Claude 3.5 Sonnet. But the whole thing is now under serious scrutiny.

By the end of last week, many people had pointed out that the Reflection tuning method was similar to fine-tuning a model to perform chain-of-thought (CoT). To me, this is not much of an issue: if you can effectively fine-tune a model to perform better on all tasks without needing a hack like adding a CoT suffix to your prompt, it means your model really did become stronger. And I'm a big believer in adding more tokens for reflection anyway: for now, LLMs don't use nearly enough.

But 𝘀𝗲𝗿𝗶𝗼𝘂𝘀 𝗱𝗼𝘂𝗯𝘁𝘀 𝗼𝗻 𝘁𝗵𝗲 𝗺𝗼𝗱𝗲𝗹 𝗵𝗮𝘃𝗲 𝗲𝗺𝗲𝗿𝗴𝗲𝗱 𝘀𝗶𝗻𝗰𝗲 𝘁𝗵𝗶𝘀 𝗺𝗼𝗿𝗻𝗶𝗻𝗴:

🤔 First, the company Artificial Analysis posted their independent evaluation of the model weights uploaded to HF. It turns out the model's performance is below Llama-3-70B's, far from the author's claims.

➡️ Matt Shumer provided API access to the model for people to try out. But people have noticed strange behaviours, and it's hard to really test a model through an API that can change at any time!

👉 Others have since shown that the model weights on HF could be a LoRA fine-tune of Llama-3-70B.

⏱️ Matt Shumer and team have said that they're investigating the model weights in order to re-upload better ones to HF. Let's wait for the results before drawing a conclusion!

Anyway, this all argues for even more transparency in LLM releases and rigorous evaluation! https://lnkd.in/eKC8fQDm
-
The #LLM landscape has changed a lot since Galileo launched the first Hallucination Index in November 2023, with larger, more powerful open- and closed-source models being announced monthly. Since then, two things have happened: the term #hallucinate became Dictionary.com’s Word of the Year, and Retrieval-Augmented Generation (#RAG) has become one of the leading methods for building AI solutions.

Galileo's new index evaluates how well 22 of the leading models adhere to given context, helping #developers make informed decisions about balancing price and performance. The report highlights 4 trends:

1. #Anthropic outperforms OpenAI: during testing, Anthropic's latest Claude 3.5 Sonnet and Claude 3 Opus consistently scored close to perfect, beating out GPT-4o and GPT-3.5, especially in shorter-context scenarios.
2. Larger is not always better: in certain cases, #smallermodels outperformed larger models.
3. Performance is not impacted by #context length: models perform particularly well with extended context lengths without losing #quality or #accuracy, reflecting how far model training and architecture have come.
4. Open source is closing the gap: while closed-source models still offer the best performance thanks to proprietary training data, open-source models like #Llama continue to improve in #hallucinationperformance without the cost barriers of their closed-source counterparts.

If you want to dive deep into the results of this research, you can access the report and executive summary at this link: https://lnkd.in/ggJhF7cx
LLM Hallucination Index RAG Special - Galileo
rungalileo.io
-
And this is what an LLM looks like on the inside (at least the first layer of it). Not so magical now, huh… If you wanna see how the embeddings layer works and what #mistral7b and #gemma look like on the inside: #genai #llm #ai https://lnkd.in/esrMNB3E
Inside the LLM: Visualizing the Embeddings Layer of Mistral-7B and Gemma-2B
https://www.youtube.com/
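For anyone curious what the video is visualizing: an embedding layer is just a lookup table from token ids to vectors. A toy version below (dimensions are made up; Mistral-7B's actual table is roughly 32,000 x 4,096):

```python
import numpy as np

# An embedding layer is a matrix of shape (vocab_size, d_model); embedding
# a token sequence is just indexing rows of that matrix by token id.
rng = np.random.default_rng(0)
vocab_size, d_model = 8, 4          # toy sizes for illustration
embedding = rng.normal(size=(vocab_size, d_model))

token_ids = [3, 1, 5]               # a "tokenized" input sequence
vectors = embedding[token_ids]      # row lookup, shape (3, 4)
```

Everything after this first layer (attention, MLPs) transforms these vectors, but the entry point really is this plain table lookup.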
-
I finished the "Building and Evaluating Advanced RAG" course provided by DeepLearning.AI, LlamaIndex (Jerry Liu), and TruEra (Anupam Datta). I learned about new retrieval concepts (sentence-window retrieval and auto-merging retrieval) that may help increase RAG performance. Both help improve context quality, so we can provide better context in the prompt fed to the LLM within the RAG pipeline.

TruEra [1] is a new evaluation tool that lets you measure performance at each step of the RAG pipeline. It looks very cool. I will definitely explore their documentation further!

- Context Relevance: Is the retrieved context relevant to the query?
- Answer Relevance: Is the response relevant to the query?
- Groundedness: Is the response supported by the context?

[1] https://lnkd.in/gP5DFrfE
https://lnkd.in/gJmnK8Jf

#llamaindex #deeplearning.ai #truera #LLM #Generativeai #Genai #RAG #evaluation #LLMOps
HSU YU-MING, congratulations on completing Building and Evaluating Advanced RAG!
learn.deeplearning.ai
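The three questions above form the course's RAG evaluation "triad". As a rough mental model (not the actual TruEra/TruLens implementation, which scores with LLMs or embeddings), each metric is a similarity between a different pair of query, retrieved context, and response; the function names here are hypothetical:

```python
# The RAG triad as naive Jaccard-overlap proxies. This sketch only shows
# which pair of texts each metric compares, not how real scorers work.

def _sim(a, b):
    a, b = set(a.lower().split()), set(b.lower().split())
    return len(a & b) / (len(a | b) or 1)

def rag_triad(query, context, response):
    return {
        "context_relevance": _sim(query, context),   # context vs. query
        "answer_relevance": _sim(query, response),   # response vs. query
        "groundedness": _sim(response, context),     # response vs. context
    }
```

Low context relevance points at the retriever, low groundedness at the generator hallucinating past its context, which is why scoring each step separately is useful.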
-
Fun Fact: In 2023, the word “hallucinate” saw an 85% increase in digital media mentions and a 46% rise in dictionary lookups, earning it the title of Dictionary.com’s Word of the Year!

🔭 Galileo, a leader in hallucination detection, recently released the 2nd installment of their Hallucination Index. It ranks 22 leading models based on their performance in real-world scenarios and their propensity to hallucinate.

The Takeaways:
💸 Open-source models continue to improve in hallucination performance without the cost barriers of their closed-source counterparts
📝 Models are excelling with longer context lengths, maintaining quality and accuracy
🦐 Smaller models are outperforming larger ones in various tests
🎓 Anthropic’s Claude 3.5 Sonnet and Claude 3 Opus consistently scored close to perfect, beating out OpenAI’s GPT-4 and GPT-3.5
🌍 The global push for high-performing language models is making a noticeable difference

If you haven't yet, I highly recommend reading through the full index! https://lnkd.in/dQvju_nu

#AI #Hallucination #AIResearch #OpenSource #MachineLearning
LLM Hallucination Index RAG Special - Galileo
rungalileo.io
-
🚀 Excited to share that I have completed the "Knowledge Graphs for RAG" course organized by Neo4j and DeepLearning.AI! 🌐📊 This knowledge opens up new possibilities in the integration of knowledge graphs with Retrieval-Augmented Generation (RAG), enhancing the way we process and generate information in AI and ML. 💡 A big thanks to the organizers for an incredible learning experience!
Kacper Urban, congratulations on completing Knowledge Graphs for RAG!
learn.deeplearning.ai
-
How serious are we about AI? Serious enough to get invited to speak at the AI Engineer World's Fair this summer! Our amazing CTO, Emil Sedgh, recently took the stage with renowned AI expert Hamel Husain to discuss part of how we've made our AI Copilot grow by leaps and bounds. Watch below!
How to Construct Domain Specific LLM Evaluation Systems: Hamel Husain and Emil Sedgh
https://www.youtube.com/
-
While multicloud environments have become the norm, their complexity is a real challenge. Read the #Dynatrace state of #observability in 2024 report to see why an AI, analytics, and automation strategy is key.
The state of observability in 2024
content.dynatrace.social