Zelzin M.’s Post

This tutorial by Vipula Rawte and Aman Chadha introduces the emerging topic of hallucination in LLMs, comprehensively exploring its key aspects, including benchmarking, detection, and mitigation techniques.

LREC-COLING 2024 Tutorial: Hallucination in Large Language Models
vr25.github.io

Aman Chadha

GenAI Leadership @ AWS • Stanford AI • Ex-, Amazon Alexa, Nvidia, Qualcomm • EB-1 "Einstein Visa" Recipient/Mentor • EMNLP 2023 Outstanding Paper Award

5mo

Thanks for the kind words! Apart from Vipula and me, Prof. Amitava Das was the brains behind the tutorial, credits to him :)

Aniel Villegas

Sr. Data Scientist / Technical Lead

5mo

Thanks, Zel!
