Paul Golding’s Post

Hands-on R&D Multidisciplinary AI Leader | 30 patents in AI/ML | Enterprise AI | AI Chip Design | Quantum AI

Does Fine-tuning Increase Hallucinations? Perhaps...

From this paper: https://lnkd.in/gMZUUwVg

"We demonstrate that large language models struggle to acquire new factual knowledge through fine-tuning, as fine-tuning examples that introduce new knowledge are learned significantly slower than those consistent with the model's knowledge."

Okay, this is more or less what we expect. And then there's this:

👉 "However, we also find that as the examples with new knowledge are eventually learned, they linearly increase the model's tendency to hallucinate"

Uh-oh: fine-tuning can increase hallucinations!

What's the takeaway? Anyone paying attention to LLMs should realize by now that your evaluation strategy and an effective search for an optimal solution are vital to success. Following a formula like "RAG + Fine-tuning = My-domain LLM" is merely a starting point. Experiments consistently show that such formulations can perform very badly out of the box, even when an initial inspection makes them look incredibly powerful. Don't be fooled by "demo wow-factor-ness"!
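
As a concrete illustration of the "evaluation strategy" point, here is a minimal sketch (not the paper's method) of comparing hallucination rates on a held-out fact set before and after fine-tuning. The `generate_answer` callable is a hypothetical stand-in for whatever inference call you use, and substring containment is a crude proxy for correctness, not a rigorous metric.

```python
# Minimal sketch: estimate a hallucination rate on held-out QA pairs so the
# base checkpoint and the fine-tuned checkpoint can be compared directly.
# Assumptions: `generate_answer` wraps your model's inference call; matching
# by reference-answer containment is a rough proxy, not the paper's evaluation.

from typing import Callable, Iterable, Tuple


def hallucination_rate(
    generate_answer: Callable[[str], str],
    qa_pairs: Iterable[Tuple[str, str]],
) -> float:
    """Fraction of held-out questions whose generated answer does not
    contain the reference answer (treated here as a hallucination proxy)."""
    total, wrong = 0, 0
    for question, reference in qa_pairs:
        prediction = generate_answer(question).strip().lower()
        if reference.strip().lower() not in prediction:
            wrong += 1
        total += 1
    return wrong / max(total, 1)


# Usage idea: run the SAME held-out set against both checkpoints.
# held_out = [("Who wrote 'Dune'?", "Frank Herbert"), ...]
# base_rate  = hallucination_rate(base_model_answer, held_out)
# tuned_rate = hallucination_rate(tuned_model_answer, held_out)
# A tuned_rate noticeably above base_rate is the warning sign the paper describes.
```

The point is not the specific metric; it is that you measure the fine-tuned model against the base model on facts it was never trained on, rather than trusting the demo.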
