Parallel-Probe is a simple add-on that lets many AI “thought paths” think at once but stop early when they already agree.
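The early-agreement idea can be shown with a toy sketch (the function names, the simple vote threshold, and the toy answers are all illustrative assumptions, not the paper's actual algorithm): keep sampling answers from independent "thought paths" and stop as soon as enough of them match.

```python
from collections import Counter

def parallel_probe(sample_fn, max_paths=8, agree_k=3):
    """Toy early-consensus voting: run up to max_paths 'thought paths',
    but stop as soon as agree_k of them give the same answer."""
    votes = Counter()
    for i in range(max_paths):
        ans = sample_fn(i)           # final answer from one reasoning path
        votes[ans] += 1
        if votes[ans] >= agree_k:    # enough paths already agree -> stop early
            return ans, i + 1        # answer plus how many paths were needed
    # no early consensus: fall back to plain majority vote
    return votes.most_common(1)[0][0], max_paths

# hypothetical sampler: the paths mostly answer "42"
answers = ["42", "17", "42", "42", "99"]
ans, used = parallel_probe(lambda i: answers[i])
```

Here the fourth path gives the third "42", so the remaining paths never run; that saved compute is the whole point of the add-on.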
TTCS is a way for a model to teach itself at test time: it first makes easier practice questions that are similar to the real hard question, then learns from them before answering.
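A minimal toy of that self-curriculum loop (the `ToyLearner`, the numeric "difficulty", and the way easier questions are generated are all stand-in assumptions, nothing like the real model): practice on self-made easier variants first, then attempt the hard question.

```python
class ToyLearner:
    """Stand-in for a model: practicing raises a numeric skill level."""
    def __init__(self):
        self.skill = 0
    def learn(self, question):
        self.skill += question["difficulty"]      # practice raises skill
    def solve(self, question):
        return self.skill >= question["difficulty"]

def ttcs(learner, hard_q, n_practice=3):
    # 1) self-curriculum: make easier variants of the hard question
    practice = [{"difficulty": hard_q["difficulty"] * (i + 1) // (n_practice + 1)}
                for i in range(n_practice)]
    # 2) learn from the self-made practice set
    for q in practice:
        learner.learn(q)
    # 3) only now attempt the real hard question
    return learner.solve(hard_q)

hard = {"difficulty": 10}
ok = ttcs(ToyLearner(), hard)
```

A fresh `ToyLearner` fails the hard question directly (skill 0 vs difficulty 10) but passes after the easier warm-ups, which mirrors the "practice first" intuition.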
This paper studies how sure (confident) large language models are during multi-turn chats where clues arrive step by step.
The paper introduces UCoder, a way to teach a code-generating AI to get better without using any outside datasets, not even unlabeled code.
Reasoning Palette gives a language or vision-language model a tiny hidden “mood” (a latent code) before it starts answering, so it chooses a smarter plan rather than just rolling dice on each next word.
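The "draw one mood up front" idea can be sketched in a few lines (the plan names and the fake token generator are illustrative assumptions, not the model's real latent space): sample one latent plan once, then condition every subsequent step on it instead of re-rolling a strategy at each word.

```python
import random

PLANS = ["step-by-step", "work-backwards", "case-split"]

def answer(question, seed):
    rng = random.Random(seed)
    z = rng.choice(PLANS)   # one latent "mood" drawn before decoding starts
    # every later "token" is conditioned on the same z, so the whole
    # answer follows one coherent plan rather than per-token dice rolls
    tokens = [f"{z}:{t}" for t in range(3)]
    return tokens, z

tokens, z = answer("What is 2 + 2?", seed=1)
```

Whatever plan gets drawn, all three toy tokens carry the same `z`, which is the contrast with ordinary sampling where each next word is randomized independently.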
ThreadWeaver teaches a language model to split big problems into smaller parts it can solve at the same time, like teammates working in parallel.
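The split-and-solve-in-parallel idea looks like this in a toy form (summing number chunks stands in for solving subproblems; the chunking and merge rule are illustrative assumptions, not the model's learned decomposition):

```python
from concurrent.futures import ThreadPoolExecutor

def solve_part(chunk):
    return sum(chunk)   # stand-in for solving one subproblem

def threadweave(numbers, n_parts=4):
    # split the big problem into roughly equal parts
    size = (len(numbers) + n_parts - 1) // n_parts
    parts = [numbers[i:i + size] for i in range(0, len(numbers), size)]
    # solve the parts at the same time, like teammates in parallel
    with ThreadPoolExecutor() as pool:
        partials = list(pool.map(solve_part, parts))
    # merge the sub-answers into the final answer
    return sum(partials)

total = threadweave(list(range(100)))
```

The merge step matters: parallel work only helps when the sub-answers can be recombined cheaply, which is exactly what the model has to learn to set up.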