How I Study AI - Learn AI Papers & Lectures the Easy Way

Papers (6)

Filtered by tag: #camera control

WorldStereo: Bridging Camera-Guided Video Generation and Scene Reconstruction via 3D Geometric Memories

Intermediate
Yisu Zhang, Chenjie Cao et al. · Mar 2 · arXiv

WorldStereo is a method that turns a single photo (or a panorama) into a set of short camera-guided videos and then reconstructs a consistent 3D scene from them.

#video diffusion models · #camera control · #3D reconstruction

Generated Reality: Human-centric World Simulation using Interactive Video Generation with Hand and Camera Control

Intermediate
Linxi Xie, Lisong C. Sun et al. · Feb 20 · arXiv

This paper builds a "generated reality" system that lets AI-made videos react to your real head and hand movements in VR.

#generated reality · #hand pose conditioning · #video diffusion transformer

SpaceTimePilot: Generative Rendering of Dynamic Scenes Across Space and Time

Beginner
Zhening Huang, Hyeonho Jeong et al. · Dec 31 · arXiv

SpaceTimePilot is a video AI that lets you steer both where the camera goes (space) and how the action plays out (time) from a single input video.

#video diffusion · #space–time disentanglement · #camera control

Animate Any Character in Any World

Intermediate
Yitong Wang, Fangyun Wei et al. · Dec 18 · arXiv

AniX is a system that lets you place any character into any 3D world and control them with plain language, like “run forward” or “play a guitar.”

#AniX · #3D Gaussian Splatting · #world models

Wan-Move: Motion-controllable Video Generation via Latent Trajectory Guidance

Intermediate
Ruihang Chu, Yefei He et al. · Dec 9 · arXiv

Wan-Move is a new way to control how objects move in AI-generated videos by guiding motion directly within the model's latent features.

#motion-controllable video generation · #latent trajectory guidance · #point trajectories

EgoX: Egocentric Video Generation from a Single Exocentric Video

Intermediate
Taewoong Kang, Kinam Kim et al. · Dec 9 · arXiv

EgoX turns a regular third-person video into a first-person video that looks like it was filmed from the actor’s eyes.

#egocentric video generation · #exocentric to egocentric · #video diffusion models