How I Study AI - Learn AI Papers & Lectures the Easy Way

Papers (38)

Filter: #Chain-of-Thought

Mechanistic Interpretability of Large-Scale Counting in LLMs through a System-2 Strategy

Intermediate
Hosein Hasani, Mohammadali Banayeeanzade et al. · Jan 6 · arXiv

Large language models (LLMs) are good at many math problems but often mess up simple counting when the list gets long.

#mechanistic interpretability · #counting in LLMs · #System-2 prompting

Falcon-H1R: Pushing the Reasoning Frontiers with a Hybrid Model for Efficient Test-Time Scaling

Beginner
Falcon LLM Team, Iheb Chaabane et al. · Jan 5 · arXiv

Falcon-H1R is a small (7B) AI model that thinks really well without needing giant computers.

#Falcon-H1R · #Hybrid Transformer-Mamba · #Chain-of-Thought

NextFlow: Unified Sequential Modeling Activates Multimodal Understanding and Generation

Intermediate
Huichao Zhang, Liao Qu et al. · Jan 5 · arXiv

NextFlow is a single, decoder-only Transformer that can read and write both text and images in one continuous sequence.

#Next-Scale Prediction · #Autoregressive Transformer · #Dual-Codebook Tokenization

Multi-hop Reasoning via Early Knowledge Alignment

Intermediate
Yuxin Wang, Shicheng Fang et al. · Dec 23 · arXiv

This paper adds a tiny but powerful step called Early Knowledge Alignment (EKA) to multi-step retrieval systems so the model takes a quick, smart look at relevant information before it starts planning.

#Retrieval-Augmented Generation · #Iterative RAG · #Multi-hop Reasoning
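The retrieve-first idea above can be sketched as a toy iterative RAG loop where one retrieval happens before any planning. All names and the mini knowledge base below are hypothetical illustrations for the general pattern, not the paper's actual EKA interface:

```python
# Toy sketch of an iterative RAG loop with an early retrieval
# ("knowledge alignment") step before planning. The KB, planner, and
# extractor are hypothetical stand-ins, not the paper's method.

KB = {
    "france": "Paris is the capital of France.",
    "paris": "The Seine flows through Paris.",
}

def retrieve(query):
    """Return documents whose key appears in the query."""
    q = query.lower()
    return [doc for key, doc in KB.items() if key in q]

def plan(question, context):
    """Toy planner: pick the next sub-query, or None when done."""
    if any("Seine" in d for d in context):
        return None  # evidence found, stop hopping
    if any("capital" in d for d in context):
        return "river in Paris"  # the early peek revealed the bridge entity
    return "capital of France"

def answer(question, context):
    """Toy extraction: pull the river's name from the evidence."""
    for d in context:
        if "flows through" in d:
            return d.split()[1]
    return None

def iterative_rag(question, retrieve, plan, answer, max_hops=3):
    context = retrieve(question)  # early retrieval before any planning
    for _ in range(max_hops):
        sub_query = plan(question, context)
        if sub_query is None:
            break
        context += retrieve(sub_query)
    return answer(question, context)

print(iterative_rag("Which river flows through the capital of France?",
                    retrieve, plan, answer))  # Seine
```

Without the initial `retrieve(question)` call, the planner would have no evidence that Paris is the bridge entity and would have to spend its first hop discovering it.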

Schoenfeld's Anatomy of Mathematical Reasoning by Language Models

Intermediate
Ming Li, Chenrui Fan et al. · Dec 23 · arXiv

This paper turns messy chains of thought from language models into clear, named steps so we can see how they really think through math problems.

#Schoenfeld’s Episode Theory · #Cognitive Episodes · #ThinkARM

When Reasoning Meets Its Laws

Intermediate
Junyu Zhang, Yifan Sun et al. · Dec 19 · arXiv

The paper proposes the Laws of Reasoning (LORE), simple rules that say how much a model should think and how accurate it can be as problems get harder.

#Large Reasoning Models · #Laws of Reasoning · #Compute Law

Vision-Language-Action Models for Autonomous Driving: Past, Present, and Future

Intermediate
Tianshuai Hu, Xiaolu Liu et al. · Dec 18 · arXiv

Traditional self-driving used separate boxes for seeing, thinking, and acting, but tiny mistakes in early boxes could snowball into big problems later.

#Vision-Language-Action · #End-to-End Autonomous Driving · #Dual-System VLA

An Anatomy of Vision-Language-Action Models: From Modules to Milestones and Challenges

Intermediate
Chao Xu, Suyu Zhang et al. · Dec 12 · arXiv

Vision-Language-Action (VLA) models are robots’ “see–think–do” brains that connect cameras (vision), words (language), and motors (action).

#Vision-Language-Action · #Embodied AI · #Multimodal Alignment

OPV: Outcome-based Process Verifier for Efficient Long Chain-of-Thought Verification

Intermediate
Zijian Wu, Lingkai Kong et al. · Dec 11 · arXiv

Big AI models often write very long step-by-step solutions, but the usual checkers either look only at the final answer or get lost in the long chain of steps.

#Outcome-based Process Verifier · #Chain-of-Thought · #Process Verification

Long-horizon Reasoning Agent for Olympiad-Level Mathematical Problem Solving

Intermediate
Songyang Gao, Yuzhe Gu et al. · Dec 11 · arXiv

This paper builds a math problem–solving agent, Intern-S1-MO, that thinks in multiple rounds and remembers proven mini-results called lemmas so it can solve very long, Olympiad-level problems.

#long-horizon reasoning · #lemma-based memory · #multi-agent reasoning

LEO-RobotAgent: A General-purpose Robotic Agent for Language-driven Embodied Operator

Intermediate
Lihuang Chen, Xiangyu Luo et al. · Dec 11 · arXiv

LEO-RobotAgent is a simple but powerful framework that lets a language model think, plan, and operate many kinds of robots using natural language.

#LEO-RobotAgent · #language-driven robotics · #LLM agent

SWAA: Sliding Window Attention Adaptation for Efficient Long-Context LLMs Without Pretraining

Intermediate
Yijiong Yu, Jiale Liu et al. · Dec 11 · arXiv

Long texts make standard attention in large language models very slow because it checks every word against every other word.

#Sliding Window Attention · #SWAA · #FA Decode
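The cost difference described above can be sketched with a sliding-window attention mask, where each token attends only to its last few neighbors instead of every earlier token. This is a minimal illustration of the general masking idea, not the paper's actual SWAA adaptation:

```python
# Minimal sketch of sliding-window causal attention masking (names are
# illustrative; SWAA's actual adaptation may differ).
import numpy as np

def sliding_window_mask(seq_len: int, window: int) -> np.ndarray:
    """Mask where token i may attend only to tokens [i-window+1, i]."""
    i = np.arange(seq_len)[:, None]  # query positions (rows)
    j = np.arange(seq_len)[None, :]  # key positions (columns)
    return (j <= i) & (j > i - window)

mask = sliding_window_mask(seq_len=8, window=3)
# Each row allows at most `window` positions, so attention cost scales
# as O(n * window) instead of O(n^2) for a full causal mask.
print(mask.sum(axis=1))  # [1 2 3 3 3 3 3 3]
```

With a fixed window, doubling the sequence length doubles the work; with full causal attention it quadruples it, which is why long texts become so slow.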