LLaDA-o is a new AI that understands pictures and text and can also make images, all in one model.
Text-to-image models can make pretty pictures but still miss details in complex prompts, like object counts, positions, or exact written text in the image.
Mobile-O is a small but smart AI that can both understand pictures and make new images, and it runs right on your phone.
PixelGen is a new image generator that works directly with pixels and uses what-looks-good-to-people guidance (perceptual loss) to improve quality.
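PixelGen's actual loss isn't spelled out here, but the general idea of a perceptual loss can be sketched with a toy example: compare two images by their responses to a bank of filters (a stand-in for learned features) instead of by raw pixels. The filters here are random, purely for illustration.

```python
import numpy as np

def conv2d_valid(img, kern):
    # naive single-channel "valid" convolution (slow, for illustration only)
    h, w = img.shape
    kh, kw = kern.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kern)
    return out

def perceptual_loss(gen, ref, filters):
    # compare images in feature space (filter responses), not raw pixels,
    # which is closer to how differences actually look to people
    diffs = [np.mean((conv2d_valid(gen, k) - conv2d_valid(ref, k)) ** 2)
             for k in filters]
    return float(np.mean(diffs))

rng = np.random.default_rng(0)
filters = [rng.standard_normal((3, 3)) for _ in range(4)]  # stand-in for learned features
ref = rng.standard_normal((16, 16))                  # "real" image
gen = ref + 0.1 * rng.standard_normal((16, 16))      # slightly-off generated image
print(perceptual_loss(gen, ref, filters))
```

An identical pair gives a loss of exactly zero, and the loss grows as the generated image drifts from the reference.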
PromptRL teaches a language model to rewrite prompts while a flow-based image model learns to draw, and both are trained together using the same rewards.
DenseGRPO teaches image models using lots of small, timely rewards instead of one final score at the end.
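This isn't DenseGRPO's actual algorithm, but the contrast it draws can be shown with a toy comparison: one final score copied to every step versus a per-step "reward-to-go" that gives each step its own, more informative signal. The numbers are made up.

```python
def sparse_returns(final_reward, num_steps):
    # one score at the very end: every step gets the same, blunt signal
    return [final_reward] * num_steps

def dense_returns(step_rewards):
    # reward-to-go: each step is credited with everything it led to,
    # so early and late steps get different learning signals
    return [sum(step_rewards[t:]) for t in range(len(step_rewards))]

# e.g. small quality gains measured after each generation step (made-up numbers)
step_rewards = [1, 3, 2, 4]
print(sparse_returns(10, 4))        # → [10, 10, 10, 10]
print(dense_returns(step_rewards))  # → [10, 9, 6, 4]
```

With the sparse signal, a great early step and a bad late step look identical to the learner; the dense signal tells them apart.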
Before this work, most text-to-image models worked on VAE latents (small, squished image codes) and struggled with slow training and overfitting on high-quality fine-tuning sets.
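The "squished image codes" idea can be shown with a toy stand-in for a VAE: shrink an image to a much smaller grid of numbers, then stretch it back. This is just block averaging, not a real VAE, but it shows both the compression and the detail that gets lost.

```python
import numpy as np

def encode(img, factor=4):
    # toy stand-in for a VAE encoder: average each factor x factor block
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def decode(code, factor=4):
    # toy decoder: stretch the small code back up to full size
    return np.repeat(np.repeat(code, factor, axis=0), factor, axis=1)

rng = np.random.default_rng(0)
img = rng.standard_normal((32, 32))
code = encode(img)
print(code.shape)   # (8, 8): 16x fewer numbers than the original image
recon = decode(code)
print(recon.shape)  # back to (32, 32), but the fine detail is gone
```

Models that generate in this compressed space are cheap but inherit whatever the compression throws away, which is part of the motivation for working without it.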
Diffusion models make pictures from noise but often miss what people actually want in the prompt or what looks good to humans.
This paper turns a video model into a step-by-step visual thinker that makes one final, high-quality picture from a text prompt.
StageVAR makes image-generating AI much faster by recognizing that early steps set the meaning and structure, while later steps just polish details.
Big text-to-image models make amazing pictures but are slow because they take hundreds of tiny steps to turn noise into an image.
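The "hundreds of tiny steps" loop can be sketched with a toy denoiser that nudges a noisy image a little closer to a clean target on every step. This is not any real sampler, just an illustration of why so many small steps add up to slow generation.

```python
import numpy as np

def denoise_step(x, target, step_size=0.05):
    # toy "denoiser": nudge the noisy image a tiny bit toward the clean image
    return x + step_size * (target - x)

rng = np.random.default_rng(0)
target = np.ones((8, 8))          # stand-in for the finished picture
x = rng.standard_normal((8, 8))   # start from pure noise
for _ in range(200):              # hundreds of tiny refinement steps
    x = denoise_step(x, target)
print(float(np.abs(x - target).max()))  # tiny: the noise is almost gone
```

Each step is cheap, but a full image costs the whole loop, which is exactly what fast-sampling methods try to cut down.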
MetaCanvas lets a multimodal language model (MLLM) sketch a plan inside the generator’s hidden canvas so diffusion models can follow it patch by patch.