Large language models can get stuck after fine-tuning: they become overconfident in their answers, and at that point ordinary training stops helping.
The paper derives a simple, math-light rule that predicts whether training will make a language model more open-minded (higher entropy) or more sure of itself (lower entropy).
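To make the entropy idea concrete, here is a minimal sketch (not from the paper) that computes the Shannon entropy of a model's next-token probability distribution; the example distributions are made up for illustration:

```python
import math

def entropy(probs):
    """Shannon entropy (in nats) of a probability distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

# An "open-minded" model spreads probability across many tokens,
# giving a higher entropy...
open_minded = [0.25, 0.25, 0.25, 0.25]

# ...while an overconfident model piles probability onto one token,
# giving a lower entropy.
overconfident = [0.97, 0.01, 0.01, 0.01]

print(entropy(open_minded))    # higher
print(entropy(overconfident))  # lower
```

Training that pushes entropy down makes the model behave more like the second distribution; the paper's rule predicts which direction a given training step will push.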