How I Study AI - Learn AI Papers & Lectures the Easy Way

Papers (2)


Likelihood-Based Reward Designs for General LLM Reasoning

Beginner
Ariel Kwiatkowski, Natasha Butt et al. · Feb 3 · arXiv

Binary right/wrong rewards for training reasoning in large language models are hard to design and often too sparse to learn from.

#log-likelihood reward · #chain-of-thought (CoT) · #reinforcement learning for LLMs
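
As a rough illustration of the contrast this summary draws, here is a minimal Python sketch of a sparse binary-correctness reward versus a denser log-likelihood-based one. The function names and exact formulation are assumptions for illustration, not the paper's actual reward design.

```python
# Minimal sketch (assumed formulation; the paper's exact reward design may differ):
# contrast a sparse binary-correctness reward with a denser log-likelihood reward.

def binary_reward(model_answer: str, reference: str) -> float:
    """1.0 if the final answer matches the reference exactly, else 0.0 (sparse)."""
    return 1.0 if model_answer.strip() == reference.strip() else 0.0

def log_likelihood_reward(token_logprobs: list[float]) -> float:
    """Average log-probability the policy assigns to the reference answer tokens.

    This stays informative even when the sampled answer is wrong, which is the
    density advantage the summary alludes to.
    """
    return sum(token_logprobs) / max(len(token_logprobs), 1)

# Toy comparison: a near-miss answer gets zero binary reward,
# while the likelihood-based signal is still graded.
print(binary_reward("42.0", "42"))                # 0.0 -> no learning signal
print(log_likelihood_reward([-0.3, -1.2, -0.5]))  # ~-0.67 -> graded signal
```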

Exploration vs Exploitation: Rethinking RLVR through Clipping, Entropy, and Spurious Reward

Intermediate
Peter Chen, Xiaopeng Li et al. · Dec 18 · arXiv

The paper studies why two opposite-sounding tricks in RL for reasoning—adding random (spurious) rewards and reducing randomness (entropy)—can both seem to help large language models think better.

#RLVR · #Group Relative Policy Optimization · #ratio clipping
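
For context on one of the tagged mechanisms, here is a minimal sketch of the PPO/GRPO-style ratio clipping step used in RLVR updates; the hyperparameters, helper names, and toy inputs are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of ratio clipping in a PPO/GRPO-style policy update
# (illustrative only; clip_eps and the toy inputs are assumptions).
import torch

def clipped_policy_loss(logp_new: torch.Tensor,
                        logp_old: torch.Tensor,
                        advantages: torch.Tensor,
                        clip_eps: float = 0.2) -> torch.Tensor:
    """Clip the importance ratio so one update cannot move the policy too far.

    Clipping limits how aggressively large advantages are exploited; how this
    interacts with entropy and spurious rewards is the question the paper studies.
    """
    ratio = torch.exp(logp_new - logp_old)                      # pi_new / pi_old per token
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps) * advantages
    return -torch.min(unclipped, clipped).mean()                # pessimistic (lower) bound

# Toy usage with made-up log-probs and group-normalized advantages.
loss = clipped_policy_loss(torch.tensor([-0.9, -1.1]),
                           torch.tensor([-1.0, -1.0]),
                           torch.tensor([1.0, -0.5]))
print(loss.item())
```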