๐ŸŽ“ How I Study AI
How I Study AI - Learn AI Papers & Lectures the Easy Way

Papers (3)


AgentArk: Distilling Multi-Agent Intelligence into a Single LLM Agent

Intermediate
Yinyi Luo, Yiqiao Jin et al. · Feb 3 · arXiv

AgentArk teaches one language model to think like a whole team of models that debate, so it can solve tough problems quickly without running a long, expensive debate at answer time.

#multi-agent distillation #process reward model #GRPO

Sparse Reward Subsystem in Large Language Models

Intermediate
Guowei Xu, Mert Yuksekgonul et al. · Feb 1 · arXiv

The paper discovers a tiny, special group of neurons inside large language models (LLMs) that act like a reward system in the human brain.

#value neurons #dopamine neurons #reward prediction error

Training Reasoning Models on Saturated Problems via Failure-Prefix Conditioning

Intermediate
Minwu Kim, Safal Shrestha et al. · Jan 28 · arXiv

When language models are trained with reinforcement learning that uses right-or-wrong rewards, learning can stall on 'saturated' problems that the model almost always solves.

#failure-prefix conditioning #RLVR #GRPO
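A minimal sketch of the stall the last summary describes, not code from any of the papers above: in GRPO-style RL, each rollout's advantage is its reward minus the group mean, scaled by the group standard deviation, so when a problem is saturated and every rollout earns the same right-or-wrong reward, all advantages collapse to zero and the update carries no learning signal. The function name and the `eps` stabilizer here are illustrative assumptions.

```python
def grpo_advantages(rewards, eps=1e-8):
    """Group-relative advantages (GRPO-style): reward minus the
    group mean, divided by the group standard deviation."""
    mean = sum(rewards) / len(rewards)
    std = (sum((r - mean) ** 2 for r in rewards) / len(rewards)) ** 0.5
    return [(r - mean) / (std + eps) for r in rewards]

# Mixed group: some rollouts right, some wrong -> usable signal.
print(grpo_advantages([1.0, 0.0, 1.0, 0.0]))

# Saturated problem: every rollout is already correct, so all
# advantages are zero and the policy gradient vanishes.
print(grpo_advantages([1.0, 1.0, 1.0, 1.0]))
```

This is why a problem the model "almost always" solves contributes nothing to training, which is the failure mode that failure-prefix conditioning targets.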