This survey turns model understanding into a step-by-step repair toolkit called Locate, Steer, and Improve.
Language models can act like many characters, but they usually aim to be a helpful Assistant after post-training.
Large language models (LLMs) are good at many math problems but often mess up simple counting when the list gets long.
This paper introduces a new method (called RISE) to find and control how AI models think, without needing any human-made labels.
Large language models (LLMs) don't act as a single brain; inside, each layer and module quietly makes its own mini-decisions, called internal policies.