How I Study AI - Learn AI Papers & Lectures the Easy Way

Papers (2) · tag: #tokens per second

LLaDA2.1: Speeding Up Text Diffusion via Token Editing

Beginner
Tiwei Bie, Maosong Cao et al. · Feb 9 · arXiv

LLaDA2.1 teaches a diffusion-style language model to write fast rough drafts and then fix its own mistakes by editing tokens it already wrote.

#discrete diffusion language model · #editable decoding · #token-to-token editing
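The draft-then-edit idea in the summary can be sketched as a tiny loop. This is a hypothetical toy, not the actual LLaDA2.1 code: a stand-in `model` emits a full rough draft in one parallel pass, and each refinement step re-predicts only the positions the model is least confident about, editing tokens it already wrote instead of regenerating everything.

```python
def draft_then_edit(prompt, model, max_steps=4, threshold=0.9):
    """Toy draft-then-edit decoding loop (illustrative sketch only)."""
    # One parallel pass writes a complete, possibly sloppy, rough draft.
    draft, conf = model(prompt, draft=None)
    for _ in range(max_steps):
        # Find the positions the model is unsure about...
        to_edit = [i for i, c in enumerate(conf) if c < threshold]
        if not to_edit:
            break  # every token is confident; stop early
        # ...and re-predict only those, keeping the rest of the draft.
        new_draft, new_conf = model(prompt, draft=draft)
        for i in to_edit:
            draft[i] = new_draft[i]
            conf[i] = new_conf[i]
    return draft

def toy_model(prompt, draft=None):
    # Stand-in "model" (not a real network): the first pass makes one
    # mistake and flags it with low confidence; an edit pass fixes it.
    if draft is None:
        return ["the", "dog", "sat"], [0.95, 0.3, 0.95]
    return ["the", "cat", "sat"], [0.99, 0.99, 0.99]
```

Because only low-confidence positions are rewritten, high-confidence tokens from the fast first pass are never touched again, which is where the speedup comes from.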

Fast and Accurate Causal Parallel Decoding using Jacobi Forcing

Intermediate
Lanxiang Hu, Siqi Kou et al. · Dec 16 · arXiv

Autoregressive (AR) models normally write one token at a time, which is accurate but slow for long answers; Jacobi Forcing trains them to decode many future tokens in parallel and refine them jointly, keeping AR-level accuracy at higher speed.

#Jacobi Forcing · #Jacobi decoding · #consistency distillation
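The one-token-at-a-time bottleneck is what Jacobi-style decoding attacks: keep a running guess for many future tokens and update every position in parallel until the guesses stop changing, at which point the fixed point matches greedy one-at-a-time decoding. A minimal toy sketch, with a hypothetical deterministic `next_token` oracle standing in for the model (Jacobi Forcing itself additionally trains the model, via consistency distillation, so that convergence takes fewer sweeps):

```python
def jacobi_decode(prompt, next_token, n):
    """Toy Jacobi decoding sketch (illustrative, not the paper's code)."""
    # Arbitrary initial guess for all n future tokens at once.
    guess = [0] * n
    # n parallel sweeps always suffice: after sweep k the first k tokens
    # agree with sequential greedy decoding and never change again.
    for _ in range(n):
        # Each position is re-predicted from the *current* guess of the
        # tokens before it; one "model call" refreshes all n positions.
        new = [next_token(prompt + guess[:i]) for i in range(n)]
        if new == guess:
            break  # fixed point reached
        guess = new
    return guess
```

When a sweep changes nothing, the whole block is accepted, so several tokens can be finalized per model call instead of exactly one.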