How I Study AI - Learn AI Papers & Lectures the Easy Way

Papers (4) — filtered by tag: #physical plausibility

ArtHOI: Articulated Human-Object Interaction Synthesis by 4D Reconstruction from Video Priors

Beginner
Zihao Huang, Tianqi Liu et al. · Mar 4 · arXiv

ArtHOI is a new zero-shot method that makes people and everyday articulated objects (like doors, drawers, and fridges) move together realistically using only a single generated video as guidance.

#articulated human-object interaction #4D reconstruction #optical flow segmentation

Self-Refining Video Sampling

Intermediate
Sangwon Jang, Taekyung Ki et al. · Jan 26 · arXiv

This paper shows how a video generator can refine its own videos during sampling, without extra training or external verifiers.

#video generation #flow matching #denoising autoencoder

Rethinking Video Generation Model for the Embodied World

Beginner
Yufan Deng, Zilin Pan et al. · Jan 21 · arXiv

Robots need generated videos that not only look good but also obey real-world physics and actually complete the task they were asked to perform.

#robot video generation #embodied AI #benchmark

Active Intelligence in Video Avatars via Closed-loop World Modeling

Intermediate
Xuanhua He, Tianyu Yang et al. · Dec 23 · arXiv

This paper turns video avatars from passive puppets into active agents that can plan, act, check their own work, and correct mistakes over many steps.

#ORCA #L-IVA #Internal World Model