How I Study AI - Learn AI Papers & Lectures the Easy Way

Papers (3)

Tag: #self-forcing

LIVE: Long-horizon Interactive Video World Modeling

Intermediate
Junchao Huang, Ziyang Ye et al. · Feb 3 · arXiv

LIVE is a new way to train video-making AIs so their mistakes don’t snowball over long videos.

#cycle consistency · #autoregressive video diffusion · #exposure bias

FlowAct-R1: Towards Interactive Humanoid Video Generation

Intermediate
Lizhen Wang, Yongming Zhu et al. · Jan 15 · arXiv

FlowAct-R1 is a new system that generates lifelike human videos in real time, so the on-screen person can react quickly as you talk to them.

#interactive humanoid video · #real-time streaming generation · #temporal consistency

Yume-1.5: A Text-Controlled Interactive World Generation Model

Intermediate
Xiaofeng Mao, Zhen Li et al. · Dec 26 · arXiv

Yume-1.5 is a model that turns text or a single image into a living, explorable video world you can move through with the keyboard.

#interactive world generation · #video diffusion · #temporal-spatial-channel modeling