This paper shows that comics (multi-panel pictures with words) can help AI models think through problems step by step, much as a student shows their work.
Diffusion language models (dLLMs) generate several tokens at once, but at each step they usually discard a lot of useful information they have already computed; RCD keeps and reuses that information instead of throwing it away.
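To make the idea concrete, here is a tiny Python sketch of reusing predictions across decoding steps instead of discarding them. This is not the paper's actual RCD algorithm: the toy_model stand-in, the random confidence scores, and the "keep whichever guess is more confident" merge rule are all assumptions made up purely for illustration.

```python
import random

MASK = "<mask>"
VOCAB = ["yes", "no", "maybe", "so"]

def toy_model(tokens):
    """Stand-in for one dLLM forward pass: a (guess, confidence)
    pair for every still-masked position."""
    return {i: (random.choice(VOCAB), random.random())
            for i, tok in enumerate(tokens) if tok == MASK}

def decode_with_reuse(length=6, steps=3, commit_per_step=2):
    tokens = [MASK] * length
    cache = {}  # position -> (guess, confidence) carried over from earlier steps
    for _ in range(steps):
        fresh = toy_model(tokens)
        merged = {}
        for pos, (tok, conf) in fresh.items():
            old = cache.get(pos)
            # Instead of discarding last step's prediction, keep whichever
            # of the cached and fresh guesses is more confident.
            merged[pos] = old if (old is not None and old[1] > conf) else (tok, conf)
        # Commit the most confident positions; cache the rest for reuse next step.
        ranked = sorted(merged.items(), key=lambda kv: kv[1][1], reverse=True)
        for pos, (tok, _conf) in ranked[:commit_per_step]:
            tokens[pos] = tok
        cache = dict(ranked[commit_per_step:])
    return tokens

print(decode_with_reuse())
```

The point of the sketch is only the cache: a plain greedy decoder would throw `merged` away after committing a few tokens, whereas here the uncommitted guesses survive into the next step.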
Big models are often used to grade AI answers, but they are expensive, slow, and overly sensitive to how the grading prompt is worded.
Traditional supervised fine-tuning (SFT) trains a model to copy a single reference answer word for word, which can cause it to overfit to the exact wording rather than the underlying idea.
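A small illustrative sketch of why that happens: standard SFT scores the model with token-level cross-entropy against one reference answer, so a correct paraphrase earns no credit. The sft_loss helper, tokens, and probabilities below are invented for illustration and are not taken from the paper.

```python
import math

def sft_loss(model_probs, reference_tokens):
    """Token-level cross-entropy against a single reference answer.
    model_probs[i] maps candidate tokens to the model's probability
    at position i. Only the reference's exact wording is rewarded."""
    loss = 0.0
    for i, tok in enumerate(reference_tokens):
        p = model_probs[i].get(tok, 1e-9)  # a paraphrase token gets ~no credit
        loss += -math.log(p)
    return loss / len(reference_tokens)

# The model leans toward the correct paraphrase "a dozen", but the loss
# still punishes it because the reference happens to say "twelve".
reference = ["the", "answer", "is", "twelve"]
model_probs = [
    {"the": 0.9}, {"answer": 0.9}, {"is": 0.9},
    {"twelve": 0.1, "a": 0.6},  # starts the paraphrase "a dozen"
]
print(round(sft_loss(model_probs, reference), 3))
```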
Diffusion Language Models (DLMs) write by polishing a whole draft over several passes, rather than producing it one token at a time.
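Here is a toy Python sketch of that multi-pass "polishing" loop: start from a fully masked draft, let a stand-in model guess every blank, commit only the most confident guesses, and repeat. The toy_model and its random confidences are placeholders for illustration, not any real DLM.

```python
import random

MASK = "<mask>"
VOCAB = ["the", "cat", "sat", "on", "a", "mat", "."]

def toy_model(tokens):
    """Stand-in for a DLM: guess a token and a confidence score
    for every masked position (a real model conditions on the whole draft)."""
    return {i: (random.choice(VOCAB), random.random())
            for i, tok in enumerate(tokens) if tok == MASK}

def generate(length=8, passes=4):
    tokens = [MASK] * length          # start from a fully masked draft
    per_pass = max(1, length // passes)
    for _ in range(passes):
        guesses = toy_model(tokens)
        if not guesses:
            break
        # Commit only the most confident guesses this pass;
        # the rest stay masked and get revisited on the next pass.
        best = sorted(guesses.items(), key=lambda kv: kv[1][1], reverse=True)
        for pos, (tok, _conf) in best[:per_pass]:
            tokens[pos] = tok
    return " ".join(tokens)

print(generate())
```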