Long AI tasks can go wrong early, and those early mistakes can snowball into bigger ones, a pattern the paper calls the Spiral of Hallucination.
This paper trains AI models not only to solve problems but also to recognize when their own answers might be wrong.