How I Study AI - Learn AI Papers & Lectures the Easy Way

Papers (6)


Training LLMs for Divide-and-Conquer Reasoning Elevates Test-Time Scalability

Intermediate
Xiao Liang, Zhong-Zhi Li et al. · Feb 2 · arXiv

The paper trains language models to solve hard problems by first breaking them into smaller parts and then solving those parts, instead of only thinking in one long chain.

#divide-and-conquer reasoning · #chain-of-thought · #reinforcement learning
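As a rough illustration of the decompose-then-solve idea in the summary above, the sketch below wires a generic text-in/text-out model call (`llm`) into three stages: split the problem, solve each part, merge. The prompt wording, the `divide_and_conquer` helper, and the toy `echo` stand-in are assumptions for illustration, not the paper's training setup.

```python
from typing import Callable, List

def divide_and_conquer(problem: str, llm: Callable[[str], str]) -> str:
    """Solve a problem by decomposing it, solving each part, then merging."""
    # Stage 1: ask the model to break the problem into numbered sub-problems.
    plan = llm(f"Break this problem into numbered sub-problems:\n{problem}")
    sub_problems = [line.strip() for line in plan.splitlines() if line.strip()]

    # Stage 2: solve each sub-problem with its own (short) chain of thought.
    partial_solutions: List[str] = []
    for sub in sub_problems:
        partial_solutions.append(llm(f"Solve this sub-problem:\n{sub}"))

    # Stage 3: merge the partial results into one final answer.
    merged = "\n".join(partial_solutions)
    return llm(f"Combine these partial results into a final answer for:\n"
               f"{problem}\n\n{merged}")

if __name__ == "__main__":
    # Toy stand-in for a real model call, just to make the sketch executable.
    echo = lambda prompt: f"[model output for: {prompt[:40]}...]"
    print(divide_and_conquer("Compute 12 * 34 + 56 * 78", echo))
```

The paper's contribution is training models (with reinforcement learning) to carry out this kind of decomposition themselves, rather than relying on a fixed prompting pipeline like the one above.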

Adaptive Ability Decomposing for Unlocking Large Reasoning Model Effective Reinforcement Learning

Intermediate
Zhipeng Chen, Xiaobo Qin et al. · Jan 31 · arXiv

This paper teaches a model to make its own helpful hints (sub-questions) and then use those hints to learn better with reinforcement learning that checks answers automatically.

#RLVR · #Large Reasoning Models · #Sub-question Guidance
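To make the "hints plus automatic checking" loop concrete, here is a generic RLVR-style sketch: one rollout writes its own sub-question hints and then answers with them in context, and a simple verifier scores the answer against a known ground truth. The `rollout_with_hints` and `verifiable_reward` helpers, the prompt text, and the substring check are assumptions, not the paper's implementation.

```python
from typing import Callable

def rollout_with_hints(question: str, llm: Callable[[str], str]) -> str:
    """One rollout: generate sub-question hints, then answer using them."""
    hints = llm(f"Write 2-3 sub-questions that would help solve:\n{question}")
    return llm(f"Question: {question}\nHelpful sub-questions:\n{hints}\nFinal answer:")

def verifiable_reward(prediction: str, gold_answer: str) -> float:
    """Automatic check: 1.0 if the gold answer appears in the output, else 0.0.

    Real RLVR setups use stricter parsers/verifiers; substring match is
    only a placeholder.
    """
    return 1.0 if gold_answer.strip() in prediction else 0.0

if __name__ == "__main__":
    fake_llm = lambda prompt: "The answer is 42"   # stand-in model call
    answer = rollout_with_hints("What is 6 * 7?", fake_llm)
    print(verifiable_reward(answer, "42"))         # 1.0 -> reinforce this rollout
```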

Self-Distillation Enables Continual Learning

Intermediate
Idan Shenfeld, Mehul Damani et al. · Jan 27 · arXiv

This paper shows a simple way for AI models to keep learning new things without forgetting what they already know.

#Self-Distillation Fine-Tuning · #On-Policy Distillation · #Continual Learning
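A common way to implement this kind of idea, assumed here rather than taken from the paper, is to keep a frozen copy of the model as its own teacher while fine-tuning on new data, and add a KL term that keeps the student's token distribution close to the teacher's. The simple offline loss below is a simplification; the paper's tags mention on-policy distillation, which would compute the divergence on the model's own sampled outputs instead.

```python
import torch
import torch.nn.functional as F

def self_distillation_loss(student_logits: torch.Tensor,
                           teacher_logits: torch.Tensor,
                           labels: torch.Tensor,
                           alpha: float = 0.5) -> torch.Tensor:
    """Mix a new-task loss with a KL penalty toward a frozen copy of the model.

    student_logits / teacher_logits: (batch, seq, vocab); labels: (batch, seq).
    `alpha` trades off learning the new task vs. staying near old behavior.
    """
    # Standard next-token loss on the new data.
    task_loss = F.cross_entropy(student_logits.flatten(0, 1), labels.flatten())

    # KL divergence between the frozen "self" teacher's token distribution
    # and the student's, penalizing drift away from pre-update predictions.
    kl = F.kl_div(F.log_softmax(student_logits, dim=-1),
                  F.log_softmax(teacher_logits, dim=-1),
                  log_target=True, reduction="batchmean")
    return (1 - alpha) * task_loss + alpha * kl

if __name__ == "__main__":
    B, T, V = 2, 5, 11
    student = torch.randn(B, T, V, requires_grad=True)
    teacher = torch.randn(B, T, V)                 # frozen copy: no gradient
    labels = torch.randint(0, V, (B, T))
    print(self_distillation_loss(student, teacher, labels))
```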

The Flexibility Trap: Why Arbitrary Order Limits Reasoning Potential in Diffusion Language Models

Beginner
Zanlin Ni, Shenzhi Wang et al. · Jan 21 · arXiv

Diffusion language models can write tokens in any order, but that freedom can accidentally hurt their ability to reason well.

#diffusion language model · #arbitrary order generation · #autoregressive training
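To make "any order" concrete, the toy decoder below contrasts fixed left-to-right filling of masked positions with confidence-based arbitrary-order filling. The random scorer merely stands in for a real masked diffusion model; none of this is the paper's code, it only illustrates the two decoding orders being compared.

```python
import random

MASK = "<mask>"

def toy_scores(tokens):
    """Stand-in for a diffusion LM: fake per-position confidence for each mask."""
    return {i: random.random() for i, t in enumerate(tokens) if t == MASK}

def decode(tokens, order="arbitrary", seed=0):
    """Fill masked positions one at a time.

    order="left_to_right": always fill the left-most mask (autoregressive-like).
    order="arbitrary":     fill the position the model is most confident about,
                           i.e. the freedom the paper argues can hurt reasoning.
    """
    random.seed(seed)
    tokens = list(tokens)
    while MASK in tokens:
        scores = toy_scores(tokens)
        if order == "left_to_right":
            pos = min(scores)                      # left-most remaining mask
        else:
            pos = max(scores, key=scores.get)      # highest-confidence mask
        tokens[pos] = f"tok@{pos}"                 # fake prediction for that slot
    return tokens

if __name__ == "__main__":
    seq = [MASK] * 6
    print(decode(seq, "left_to_right"))
    print(decode(seq, "arbitrary"))
```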

AgencyBench: Benchmarking the Frontiers of Autonomous Agents in 1M-Token Real-World Contexts

Intermediate
Keyu Li, Junhao Shi et al. · Jan 16 · arXiv

AgencyBench is a giant test that checks how well AI agents can handle real, long, multi-step jobs, not just short puzzles.

#autonomous agents · #long-horizon evaluation · #agent benchmarking

Multiplex Thinking: Reasoning via Token-wise Branch-and-Merge

Intermediate
Yao Tang, Li Dong et al. · Jan 13 · arXiv

The paper introduces Multiplex Thinking, a new way for AI to think by sampling several likely next words at once and blending them into a single super-token.

#Multiplex Thinking · #chain-of-thought · #continuous token
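The "super-token" idea above can be pictured as a probability-weighted mixture of the embeddings of the top-k candidate next tokens, fed back to the model in place of a single hard token. The `multiplex_step` helper, the top-k choice, and the tensor shapes below are assumptions for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def multiplex_step(logits: torch.Tensor,
                   embedding: torch.nn.Embedding,
                   k: int = 4) -> torch.Tensor:
    """Blend the top-k candidate next tokens into one continuous 'super-token'.

    logits: (vocab,) next-token logits from the model.
    Returns a (dim,) vector: the probability-weighted mix of the top-k token
    embeddings, used as the next input instead of one sampled token.
    """
    probs = F.softmax(logits, dim=-1)
    top_p, top_ids = probs.topk(k)                 # k candidate branches
    weights = top_p / top_p.sum()                  # renormalize over the branches
    branch_embs = embedding(top_ids)               # (k, dim) branch embeddings
    return (weights.unsqueeze(-1) * branch_embs).sum(dim=0)   # merge into one vector

if __name__ == "__main__":
    vocab, dim = 100, 16
    emb = torch.nn.Embedding(vocab, dim)
    fake_logits = torch.randn(vocab)
    super_token = multiplex_step(fake_logits, emb)
    print(super_token.shape)                       # torch.Size([16])
```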