This paper studies why large language models (LLMs) sound more confident than they actually are when answering with retrieval-augmented generation (RAG), and how to bring their stated confidence back in line with how often they are right.
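The summary does not say how the paper measures overconfidence, but a standard way to quantify the gap between what a model claims ("I'm 90% sure") and how often it is actually correct is expected calibration error (ECE). The sketch below is a generic illustration with made-up numbers, not the paper's metric or data.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: average gap between stated confidence and observed accuracy."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(confidences[mask].mean() - correct[mask].mean())
            ece += mask.mean() * gap  # weight each bin by its share of samples
    return ece

# Toy numbers: a model that says "~90% sure" but is right far less often
# is overconfident, and ECE makes that gap visible.
conf = [0.95, 0.90, 0.92, 0.88, 0.91]
hit  = [1, 0, 1, 0, 1]
print(f"ECE = {expected_calibration_error(conf, hit):.2f}")
```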
This paper introduces YaPO, a way to gently nudge a language model’s internal activations (its “hidden thoughts”) at inference time so its behavior can be steered in a desired direction without retraining the model.
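The summary only tells us that YaPO nudges hidden states without retraining, which places it in the family of activation-steering methods. The sketch below shows that general mechanic, not YaPO’s actual algorithm: the model name (gpt2), layer index, steering strength, and the random steering vector are all placeholder assumptions.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in model; the paper's setup is unknown
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

layer_idx = 6   # hypothetical: which transformer block to steer
alpha = 4.0     # hypothetical: steering strength
# In real steering methods the vector is derived from data (e.g., the mean
# difference of activations on contrasting prompts); a random unit vector
# here just demonstrates the mechanics.
steer = torch.randn(model.config.hidden_size)
steer = steer / steer.norm()

def steering_hook(module, inputs, output):
    # GPT-2 blocks return a tuple whose first element is the hidden states;
    # add the steering vector to every position and pass the rest through.
    hidden = output[0] + alpha * steer.to(output[0].dtype)
    return (hidden,) + output[1:]

handle = model.transformer.h[layer_idx].register_forward_hook(steering_hook)

ids = tok("The weather today is", return_tensors="pt")
with torch.no_grad():
    out = model.generate(**ids, max_new_tokens=20, do_sample=False)
print(tok.decode(out[0], skip_special_tokens=True))

handle.remove()  # detach the hook to restore the unmodified model
```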
This paper introduces Laser, a new way for vision-language models to reason in their hidden (latent) space before generating any text, so they take in the whole “forest” of an image before picking out individual “trees.”
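“Thinking in hidden space” suggests latent reasoning: running extra forward passes whose outputs stay as vectors instead of being decoded into words. The toy loop below illustrates that general idea with a text-only model; Laser’s actual architecture, its number of latent steps, and how image features enter the loop are unknown here, and every name in the sketch is a stand-in.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")  # text-only stand-in model
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

prompt = "Describe the scene:"
ids = tok(prompt, return_tensors="pt").input_ids
emb = model.transformer.wte(ids)   # start from the token embeddings

num_latent_steps = 3               # hypothetical "thinking" budget
with torch.no_grad():
    for _ in range(num_latent_steps):
        out = model(inputs_embeds=emb, output_hidden_states=True)
        # Append the final hidden state at the last position as a new
        # "latent token" instead of sampling a word: the model keeps
        # thinking in vector space rather than committing to text.
        latent = out.hidden_states[-1][:, -1:, :]
        emb = torch.cat([emb, latent], dim=1)

    # Only now switch to ordinary token-by-token text generation.
    out = model.generate(inputs_embeds=emb, max_new_tokens=20,
                         do_sample=False)
print(tok.decode(out[0], skip_special_tokens=True))
```

A real vision-language model would prepend projected image features to `emb`; this text-only toy only shows the latent loop itself.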