Large reasoning models often keep generating chain-of-thought long after they have already reached the correct answer, wasting both time and tokens. Extended reasoning can help, but when every intermediate step is kept in context indefinitely, the surplus tokens become noise that degrades the final answer rather than improving it.