dLLM is a single, open-source toolbox that standardizes how diffusion language models are trained, run, and tested.
Before this work, most large language models generated text autoregressively, one token at a time; because each new token depends on every token before it, decoding is inherently sequential and hard to parallelize.
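The sequential bottleneck can be sketched in a few lines. This is a toy illustration, not dLLM code: `toy_next_token` is a hypothetical stand-in for a real model, but the loop structure mirrors autoregressive decoding, where each model call must wait for the previous one.

```python
def toy_next_token(prefix):
    # Hypothetical "model": the next token is just the running sum mod 10.
    # A real model would run a full forward pass over the prefix here.
    return sum(prefix) % 10

def autoregressive_decode(prompt, n_new_tokens):
    tokens = list(prompt)
    for _ in range(n_new_tokens):
        # One model call per generated token; the calls cannot run in
        # parallel because each depends on the output of the last.
        tokens.append(toy_next_token(tokens))
    return tokens

print(autoregressive_decode([1, 2], 3))  # → [1, 2, 3, 6, 2]
```

Diffusion language models avoid this loop: they start from a fully masked sequence and refine all positions together over a fixed number of denoising steps, so work within each step can be parallelized across the sequence.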