How I Study AI - Learn AI Papers & Lectures the Easy Way

Papers (4)

#Robot Manipulation

Chain of World: World Model Thinking in Latent Motion

Intermediate
Fuxiang Yang, Donglin Di et al. Β· Mar 3 Β· arXiv

Robots learn better when they think about how things move over time, not by redrawing every pixel of a video.

#Vision-Language-Action #World Model #Latent Motion

GigaBrain-0.5M*: a VLA That Learns From World Model-Based Reinforcement Learning

Intermediate
GigaBrain Team, Boyuan Wang et al. Β· Feb 12 Β· arXiv

GigaBrain-0.5M* is a robot brain that sees, reads, and acts, and it gets smarter by imagining the future before moving.

#Vision-Language-Action #World Model #Reinforcement Learning

RISE: Self-Improving Robot Policy with Compositional World Model

Intermediate
Jiazhi Yang, Kunyang Lin et al. Β· Feb 11 Β· arXiv

RISE lets a robot learn safely and cheaply by practicing in its imagination instead of always in the real world.

#Reinforcement Learning #World Models #Compositional World Model

VLSA: Vision-Language-Action Models with Plug-and-Play Safety Constraint Layer

Intermediate
Songqiao Hu, Zeyi Liu et al. Β· Dec 9 Β· arXiv

Robots that follow pictures and words (VLA models) can handle many tasks, but they often bump into things because safety isn't built in; VLSA adds a plug-and-play layer that enforces safety constraints.

#Vision-Language-Action #Safety Constraint Layer #Control Barrier Function