How I Study AI - Learn AI Papers & Lectures the Easy Way

Papers (4)


SpaceTimePilot: Generative Rendering of Dynamic Scenes Across Space and Time

Beginner
Zhening Huang, Hyeonho Jeong et al. · Dec 31 · arXiv

SpaceTimePilot is a video generation model that lets you steer both where the camera goes (space) and how the action plays out (time) from a single input video.

#video diffusion #space–time disentanglement #camera control

Animate Any Character in Any World

Intermediate
Yitong Wang, Fangyun Wei et al. · Dec 18 · arXiv

AniX is a system that lets you place any character into any 3D world and control them with plain language, like “run forward” or “play a guitar.”

#AniX #3D Gaussian Splatting #world models

Wan-Move: Motion-controllable Video Generation via Latent Trajectory Guidance

Intermediate
Ruihang Chu, Yefei He et al. · Dec 9 · arXiv

Wan-Move is a method for controlling how things move in AI-generated videos by guiding motion trajectories directly inside the model’s latent features.

#motion-controllable video generation #latent trajectory guidance #point trajectories

EgoX: Egocentric Video Generation from a Single Exocentric Video

Intermediate
Taewoong Kang, Kinam Kim et al. · Dec 9 · arXiv

EgoX turns an ordinary third-person (exocentric) video into a first-person (egocentric) video that looks as if it were filmed from the actor’s point of view.

#egocentric video generation #exocentric to egocentric #video diffusion models