How I Study AI - Learn AI Papers & Lectures the Easy Way

Papers (10)

Filter: #latent reasoning

LaSER: Internalizing Explicit Reasoning into Latent Space for Dense Retrieval

Intermediate
Jiajie Jin, Yanzhao Zhang et al. · Mar 2 · arXiv

LaSER teaches a fast search model to “think” quietly inside its hidden space, so it gets the benefits of step-by-step reasoning without writing those steps out as text.

#dense retrieval · #chain-of-thought · #latent reasoning

ManCAR: Manifold-Constrained Latent Reasoning with Adaptive Test-Time Computation for Sequential Recommendation

Intermediate
Kun Yang, Yuxuan Zhu et al. · Feb 23 · arXiv

ManCAR helps recommendation systems think step by step but keeps their thoughts on realistic paths using a map of how items connect.

#sequential recommendation · #latent reasoning · #interaction graph

ThinkRouter: Efficient Reasoning via Routing Thinking between Latent and Discrete Spaces

Beginner
Xin Xu, Tong Yu et al. · Feb 12 · arXiv

ThinkRouter teaches a model to switch how it "thinks" based on how confident it feels, so it stays accurate without reasoning on forever.

#latent reasoning · #discrete token space · #confidence-aware routing

Show, Don't Tell: Morphing Latent Reasoning into Image Generation

Intermediate
Harold Haodong Chen, Xinxiang Yin et al. · Feb 2 · arXiv

LatentMorph teaches an image-making AI to quietly think in its head while it draws, instead of stopping to write out its thoughts in words.

#latent reasoning · #text-to-image generation · #autoregressive models

Research on World Models Is Not Merely Injecting World Knowledge into Specific Tasks

Intermediate
Bohan Zeng, Kaixin Zhu et al. · Feb 2 · arXiv

This paper argues that true world models are not built by sprinkling facts into single tasks, but by creating a unified system that can see, think, remember, act, and generate across many situations.

#world models · #unified framework · #multimodal reasoning

Beyond Imitation: Reinforcement Learning for Active Latent Planning

Intermediate
Zhi Zheng, Wee Sun Lee · Jan 29 · arXiv

The paper shows how to make AI think faster and smarter by planning in a hidden space instead of writing long step-by-step sentences.

#latent reasoning · #chain-of-thought · #variational autoencoder

LaViT: Aligning Latent Visual Thoughts for Multi-modal Reasoning

Intermediate
Linquan Wu, Tianxiang Jiang et al. · Jan 15 · arXiv

LaViT is a new way to teach smaller vision-language models to look at the right parts of an image before they speak.

#multimodal reasoning · #visual attention · #knowledge distillation

Fast-ThinkAct: Efficient Vision-Language-Action Reasoning via Verbalizable Latent Planning

Intermediate
Chi-Pin Huang, Yunze Man et al. · Jan 14 · arXiv

Fast-ThinkAct teaches a robot to plan with a few tiny hidden "thought tokens" instead of long paragraphs, making it much faster while staying smart.

#Vision-Language-Action · #latent reasoning · #verbalizable planning

Large Reasoning Models Are (Not Yet) Multilingual Latent Reasoners

Beginner
Yihong Liu, Raoyuan Zhao et al. · Jan 6 · arXiv

Large reasoning models can often find the right math answer in their "head" before finishing their written steps, but this works best in high-resource languages such as English and Chinese.

#latent reasoning · #chain-of-thought · #multilingual LLMs

Dynamic Large Concept Models: Latent Reasoning in an Adaptive Semantic Space

Intermediate
Xingwei Qu, Shaowen Wang et al. · Dec 31 · arXiv

Language is lumpy: easy stretches and tricky jumps are mixed together, yet standard models spend the same effort on every word; this paper proposes reasoning over adaptive semantic units so effort can match difficulty.

#Dynamic Large Concept Models · #semantic boundaries · #latent reasoning