How I Study AI - Learn AI Papers & Lectures the Easy Way

Papers (32)

Filter: #RLVR

Heterogeneous Agent Collaborative Reinforcement Learning

Intermediate
Zhixia Zhang, Zixuan Huang et al. · Mar 3 · arXiv

This paper introduces HACRL, a way for different kinds of AI agents to learn together during training but still work alone during use.

#HACRL #HACPO #heterogeneous agents

Efficient RLVR Training via Weighted Mutual Information Data Selection

Intermediate
Xinyu Zhou, Boyu Zhu et al. · Mar 2 · arXiv

Reinforcement learning (RL) trains language models by letting them try answers and learn from rewards, but training is slow if we pick the wrong practice questions.

#Reinforcement Learning #RLVR #Data Selection
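The intuition here — spend RL compute on the questions that carry the most training signal — can be shown with a toy selector. Note this sketch scores questions by the entropy of their estimated success rate, a simple stand-in; the paper's actual weighted mutual information criterion is more involved, and all names below are invented for illustration.

```python
import math

def question_scores(success_rates):
    """Score each question by the entropy of its estimated success rate.

    Questions the model always solves (p ~ 1) or always fails (p ~ 0)
    give little learning signal; p ~ 0.5 gives the most.
    """
    def entropy(p):
        p = min(max(p, 1e-9), 1 - 1e-9)  # avoid log(0)
        return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))
    return [entropy(p) for p in success_rates]

def select_top(questions, success_rates, budget):
    """Keep the `budget` highest-signal questions for RL training."""
    scored = sorted(zip(questions, question_scores(success_rates)),
                    key=lambda t: -t[1])
    return [q for q, _ in scored[:budget]]

qs = ["easy", "medium", "hard", "impossible"]
rates = [0.98, 0.55, 0.30, 0.01]
print(select_top(qs, rates, budget=2))  # ['medium', 'hard']
```

The near-certain wins and near-certain failures are dropped: rollouts on them rarely change the policy, which is the waste the paper's data selection targets.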

Learn Hard Problems During RL with Reference Guided Fine-tuning

Intermediate
Yangzhen Wu, Shanda Li et al. · Mar 1 · arXiv

ReGFT is a simple pre-RL step that shows the model partial human hints, then makes it solve problems in its own words, creating correct, model-style solutions for hard questions.

#Reference-Guided Fine-Tuning #ReGFT #ReFT

DSDR: Dual-Scale Diversity Regularization for Exploration in LLM Reasoning

Intermediate
Zhongwei Wan, Yun Shen et al. · Feb 23 · arXiv

LLMs trained with simple rewards often latch onto just a few ways of solving problems and stop exploring, which hurts their ability to find other correct answers.

#DSDR #dual-scale diversity #RLVR

DeepVision-103K: A Visually Diverse, Broad-Coverage, and Verifiable Mathematical Dataset for Multimodal Reasoning

Intermediate
Haoxiang Sun, Lizhen Xu et al. · Feb 18 · arXiv

DeepVision-103K is a new 103,000-example picture-and-text math dataset designed to help AI think better using rewards that can be checked automatically.

#DeepVision-103K #multimodal reasoning #RLVR

Gaia2: Benchmarking LLM Agents on Dynamic and Asynchronous Environments

Intermediate
Romain Froger, Pierre Andrews et al. · Feb 12 · arXiv

Gaia2 is a new test that measures how well AI agents handle real-life messiness like changing events, deadlines, and team coordination.

#Gaia2 #ARE platform #asynchronous environments

PhyCritic: Multimodal Critic Models for Physical AI

Intermediate
Tianyi Xiong, Shihao Wang et al. · Feb 11 · arXiv

PhyCritic is a judge model that checks other AI models’ answers about the physical world, like cooking steps, robot actions, or driving choices.

#Physical AI #Multimodal critic #Self-referential training

Step 3.5 Flash: Open Frontier-Level Intelligence with 11B Active Parameters

Intermediate
Ailin Huang, Ang Li et al. · Feb 11 · arXiv

Step 3.5 Flash is a huge but efficient AI that keeps 196 billion total parameters but wakes up only about 11 billion per token, so it stays both smart and fast.

#Sparse Mixture-of-Experts #Sliding-Window Attention #Head-wise Gated Attention
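The sparse Mixture-of-Experts idea behind this kind of model can be shown with a toy router: each token is sent through only its top-k experts, so most parameters stay idle per token. This is a generic illustration, not Step 3.5 Flash's actual architecture; the sizes and names are made up.

```python
import numpy as np

def topk_moe(x, gate_w, experts, k=2):
    """Route one token through its top-k experts only.

    x: (d,) token activation; gate_w: (d, n_experts) router weights;
    experts: list of (d, d) matrices standing in for expert networks.
    """
    logits = x @ gate_w                      # one router score per expert
    top = np.argsort(logits)[-k:]            # indices of the k best experts
    weights = np.exp(logits[top])
    weights /= weights.sum()                 # softmax over the chosen experts
    # Only k of the n experts run; the rest of the parameters stay "asleep".
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

rng = np.random.default_rng(0)
d, n_experts = 16, 8
x = rng.normal(size=d)
gate_w = rng.normal(size=(d, n_experts))
experts = [rng.normal(size=(d, d)) for _ in range(n_experts)]
y = topk_moe(x, gate_w, experts, k=2)
print(y.shape)  # (16,)
```

With 8 experts and k=2, only a quarter of the expert parameters are touched per token; the same principle, at much larger scale, is how 196B total parameters become roughly 11B active.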

Internalizing Meta-Experience into Memory for Guided Reinforcement Learning in Large Language Models

Intermediate
Shiting Huang, Zecheng Li et al. · Feb 10 · arXiv

The paper teaches large language models to do what good students do: find where they went wrong, turn that lesson into a rule, and remember it for next time.

#Reinforcement Learning with Verifiable Rewards #RLVR #Meta-Experience Learning

Beyond Correctness: Learning Robust Reasoning via Transfer

Intermediate
Hyunseok Lee, Soheil Abbasloo et al. · Feb 9 · arXiv

This paper teaches language models not just to get the final answer right but to think in a way others can reliably follow.

#Reinforcement Learning with Transferable Reward #RLTR #Reasoning Transferability

Does Your Reasoning Model Implicitly Know When to Stop Thinking?

Intermediate
Zixuan Huang, Xin Xia et al. · Feb 9 · arXiv

Big AI reasoning models often keep thinking long after they already found the right answer, wasting time and tokens.

#SAGE #efficient reasoning #chain of thought
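The waste described here can be made concrete with a toy early-stop rule: check the model's current best answer every reasoning step and stop once it stabilizes. This only illustrates the general idea of knowing when to stop thinking; it is not SAGE's actual mechanism, and every name below is invented.

```python
def reason_with_early_stop(steps, answer_after, patience=3):
    """Run reasoning steps, stopping once the answer stops changing.

    steps: iterable of reasoning-step ids; answer_after(step) returns the
    model's current best answer after that step (a stand-in for decoding).
    """
    last, stable, used = None, 0, 0
    for step in steps:
        used += 1
        ans = answer_after(step)
        stable = stable + 1 if ans == last else 1
        last = ans
        if stable >= patience:      # answer unchanged for `patience` steps
            break                   # stop thinking: further steps are wasted
    return last, used

# Toy trace: the model finds "42" at step 3 but would keep going to step 10.
trace = {i: "42" if i >= 3 else f"guess-{i}" for i in range(1, 11)}
answer, steps_used = reason_with_early_stop(range(1, 11), trace.__getitem__)
print(answer, steps_used)  # 42 5
```

Here the model would have spent 10 steps; the stop rule cuts that to 5 with the same final answer, which is exactly the token savings the paper is after.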

Self-Improving Multilingual Long Reasoning via Translation-Reasoning Integrated Training

Intermediate
Junxiao Liu, Zhijun Wang et al. · Feb 5 · arXiv

TRIT is a new training method that teaches AI to translate and think at the same time so it can solve hard problems in many languages without extra helper models.

#multilingual reasoning #translation-reasoning integration #self-translation