Diffusion language models (dLLMs) generate several tokens at once but usually throw away lots of helpful clues each step; RCD keeps and reuses those clues.
The paper shows that changing the language a model 'thinks in' (its language of thought) can make its English answers more varied without making them much worse in quality.