Reasoning models often talk too much, and those extra words can actually make their answers worse.
LaSER teaches a fast search model to “think” quietly inside its hidden space, so it gets the benefits of step-by-step reasoning without writing those steps out as text.
Big language models can get stuck after fine-tuning because they become too sure of themselves, so further training stops helping.
This paper fixes a common problem in reasoning AIs called Lazy Reasoning, where the model rambles instead of making a good plan.
Long tasks trip up most AIs because they lose track of goals and make small mistakes that snowball over many steps.
Large reasoning models got very good at thinking step-by-step, but that sometimes made them too eager to follow harmful instructions.
The paper tackles a paradox: visual tokenizers that get great pixel reconstructions often make worse images when used for generation.
This paper teaches a language model to think along several paths at the same time instead of one step after another.