How I Study AI - Learn AI Papers & Lectures the Easy Way

Papers (21)

#Vision-Language-Action

Fast-ThinkAct: Efficient Vision-Language-Action Reasoning via Verbalizable Latent Planning

Intermediate
Chi-Pin Huang, Yunze Man et al. · Jan 14 · arXiv

Fast-ThinkAct teaches a robot to plan with a few tiny hidden "thought tokens" instead of long paragraphs, making it much faster while staying smart.

#Vision-Language-Action #latent reasoning #verbalizable planning
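
The core idea of planning with a handful of learned latent tokens instead of decoding a long written rationale can be shown in a tiny sketch. Everything below (module names, dimensions, the number of planning tokens) is an assumption for illustration, not Fast-ThinkAct's actual architecture.

```python
# Minimal sketch (assumed, not the paper's code): plan with a few learned
# latent "thought tokens" instead of generating a long textual rationale.
import torch
import torch.nn as nn

class LatentPlanner(nn.Module):
    def __init__(self, d_model=256, n_plan_tokens=4, action_dim=7):
        super().__init__()
        # A handful of learned embeddings that stand in for verbal reasoning.
        self.plan_tokens = nn.Parameter(torch.randn(n_plan_tokens, d_model))
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.action_head = nn.Linear(d_model, action_dim)

    def forward(self, vl_tokens):  # (B, T, d_model) fused vision-language features
        plan = self.plan_tokens.unsqueeze(0).expand(vl_tokens.size(0), -1, -1)
        # Append the latent planning slots after the observation tokens.
        x = self.encoder(torch.cat([vl_tokens, plan], dim=1))
        # Read the action from the planning-token positions only.
        return self.action_head(x[:, -plan.size(1):].mean(dim=1))

actions = LatentPlanner()(torch.randn(2, 32, 256))  # -> (2, 7)
```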

VLingNav: Embodied Navigation with Adaptive Reasoning and Visual-Assisted Linguistic Memory

Intermediate
Shaoan Wang, Yuanfei Luo et al. · Jan 13 · arXiv

VLingNav is a robot navigation system that sees, reads instructions, and acts, while deciding when to think hard and when to just move.

#Vision-Language-Action #embodied navigation #adaptive chain-of-thought
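
The "decide when to think hard" behavior is essentially a gate in front of an expensive reasoning step. A minimal, hypothetical sketch follows; the entropy trigger, threshold, and function names are assumptions, not VLingNav's actual mechanism.

```python
# Hypothetical sketch of adaptive reasoning: only invoke an expensive
# chain-of-thought step when the fast policy is uncertain.
import math

def entropy(probs):
    return -sum(p * math.log(p) for p in probs if p > 0)

def act(fast_policy, slow_reasoner, observation, instruction, threshold=1.0):
    probs = fast_policy(observation, instruction)        # action distribution
    if entropy(probs) > threshold:                       # uncertain -> think hard
        return slow_reasoner(observation, instruction)   # deliberate plan
    return max(range(len(probs)), key=probs.__getitem__) # confident -> just move
```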

Vision-Language-Action Models for Autonomous Driving: Past, Present, and Future

Intermediate
Tianshuai Hu, Xiaolu Liu et al. · Dec 18 · arXiv

Traditional self-driving used separate boxes for seeing, thinking, and acting, but tiny mistakes in early boxes could snowball into big problems later.

#Vision-Language-Action #End-to-End Autonomous Driving #Dual-System VLA

Spatial-Aware VLA Pretraining through Visual-Physical Alignment from Human Videos

Intermediate
Yicheng Feng, Wanpeng Zhang et al. · Dec 15 · arXiv

Robots often see the world as flat pictures but must move in a 3D world, which makes accurate actions hard.

#Vision-Language-Action #3D spatial grounding #visual-physical alignment

DrivePI: Spatial-aware 4D MLLM for Unified Autonomous Driving Understanding, Perception, Prediction and Planning

Intermediate
Zhe Liu, Runhui Huang et al. · Dec 14 · arXiv

DrivePI is a single, small (0.5B) multimodal language model that sees with cameras and LiDAR, talks in natural language, and plans driving actions all at once.

#DrivePI #Vision-Language-Action #3D occupancy

An Anatomy of Vision-Language-Action Models: From Modules to Milestones and Challenges

Intermediate
Chao Xu, Suyu Zhang et al. · Dec 12 · arXiv

Vision-Language-Action (VLA) models are robots’ “see–think–do” brains that connect cameras (vision), words (language), and motors (action).

#Vision-Language-Action #Embodied AI #Multimodal Alignment
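
The "see–think–do" framing maps onto a simple three-stage interface: encode pixels, fuse with the instruction, decode motor commands. The skeleton below is a generic illustration with assumed names and shapes, not any specific model from the survey.

```python
# Generic see-think-do skeleton (assumed names/shapes, not a specific model):
# vision encoder -> fuse with language -> action decoder.
import torch
import torch.nn as nn

class TinyVLA(nn.Module):
    def __init__(self, vocab=1000, d=128, action_dim=7):
        super().__init__()
        self.see = nn.Sequential(                        # vision: pixels -> features
            nn.Conv2d(3, 16, 5, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, d))
        self.read = nn.EmbeddingBag(vocab, d)            # language: tokens -> features
        self.think = nn.Sequential(nn.Linear(2 * d, d), nn.ReLU())  # fuse modalities
        self.act = nn.Linear(d, action_dim)              # action: features -> motors

    def forward(self, image, token_ids):
        fused = self.think(torch.cat([self.see(image), self.read(token_ids)], dim=-1))
        return self.act(fused)

out = TinyVLA()(torch.randn(2, 3, 64, 64), torch.randint(0, 1000, (2, 8)))  # (2, 7)
```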

Openpi Comet: Competition Solution For 2025 BEHAVIOR Challenge

Intermediate
Junjie Bai, Yu-Wei Chao et al. · Dec 10 · arXiv

This paper shows how to make home-helper robots better at long, multi-step chores by smart training on diverse tasks and by polishing the model after training using its own best attempts.

#Vision-Language-Action #long-horizon manipulation #rejection sampling fine-tuning
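
"Polishing the model with its own best attempts" is commonly implemented as rejection-sampling fine-tuning: roll out the policy, keep only the successful trajectories, and fine-tune on them. The loop below is a rough, assumed sketch of that general recipe, not the team's actual competition pipeline.

```python
# Rough sketch of rejection-sampling fine-tuning (assumed loop, not the
# team's actual pipeline): keep only successful rollouts, then fine-tune.
def rejection_sampling_finetune(policy, env, tasks, n_rollouts=8, rounds=3):
    for _ in range(rounds):
        kept = []
        for task in tasks:
            for _ in range(n_rollouts):
                trajectory, success = env.rollout(policy, task)  # sample an attempt
                if success:                                      # reject failures
                    kept.append(trajectory)
        policy.finetune(kept)   # supervised fine-tuning on the best attempts
    return policy
```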

HiF-VLA: Hindsight, Insight and Foresight through Motion Representation for Vision-Language-Action Models

Intermediate
Minghui Lin, Pengxiang Ding et al. · Dec 10 · arXiv

Robots often act like goldfish with short memories; HiF-VLA fixes this by letting them use motion to remember the past and predict the future.

#Vision-Language-Action #motion vectors #temporal reasoning

VLSA: Vision-Language-Action Models with Plug-and-Play Safety Constraint Layer

Intermediate
Songqiao Hu, Zeyi Liu et al. · Dec 9 · arXiv

Robots that follow pictures and words (VLA models) can do many tasks, but they often bump into things because safety isn’t guaranteed.

#Vision-Language-Action #Safety Constraint Layer #Control Barrier Function
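
A plug-and-play safety layer built on a control barrier function (CBF) typically takes the VLA's proposed action and minimally corrects it so that a barrier condition like ∇h(x)·u ≥ -α·h(x) keeps holding. The sketch below is a generic single-constraint filter (assuming single-integrator dynamics and one affine constraint), shown only to illustrate the standard CBF idea, not VLSA's specific layer.

```python
# Generic sketch of a CBF-style safety filter (assumed single affine
# constraint and single-integrator dynamics, not VLSA's actual layer):
# minimally adjust the VLA's action so  grad_h(x) . u >= -alpha * h(x).
import numpy as np

def cbf_filter(u_nominal, h, grad_h, alpha=1.0):
    """Project the nominal action onto {u : grad_h . u >= -alpha * h}."""
    a, b = np.asarray(grad_h, float), -alpha * h
    if a @ u_nominal >= b:            # already safe: pass the action through
        return u_nominal
    # Closed-form minimal correction for a single affine constraint.
    return u_nominal + (b - a @ u_nominal) / (a @ a) * a

safe_u = cbf_filter(np.array([1.0, 0.0]), h=0.2, grad_h=[-1.0, 0.0])  # -> [0.2, 0.0]
```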