How I Study AI - Learn AI Papers & Lectures the Easy Way

Papers (181)

#GRPO

VA-π: Variational Policy Alignment for Pixel-Aware Autoregressive Generation

Intermediate
Xinyao Liao, Qiyuan He et al. · Dec 22 · arXiv

Autoregressive (AR) image models make pictures by choosing tokens one by one, but they have traditionally been judged only on picking likely tokens, not on how good the final picture looks in pixels.

#autoregressive image generation#tokenizer–generator alignment#pixel-space reconstruction


Bottom-up Policy Optimization: Your Language Model Policy Secretly Contains Internal Policies

Beginner
Yuqiao Tan, Minzheng Wang et al. · Dec 22 · arXiv

Large language models (LLMs) don’t act as a single brain; inside, each layer and module quietly makes its own mini-decisions called internal policies.

#Bottom-up Policy Optimization#internal layer policy#internal modular policy


Robust-R1: Degradation-Aware Reasoning for Robust Visual Understanding

Beginner
Jiaqi Tang, Jianmin Chen et al. · Dec 19 · arXiv

Robust-R1 teaches vision-language models to notice how a picture is damaged, think through what that damage hides, and then answer as if the picture were clear.

#Robust-R1#degradation-aware reasoning#multimodal large language models


Reasoning Palette: Modulating Reasoning via Latent Contextualization for Controllable Exploration for (V)LMs

Intermediate
Rujiao Long, Yang Li et al. · Dec 19 · arXiv

Reasoning Palette gives a language or vision-language model a tiny hidden “mood” (a latent code) before it starts answering, so it chooses a smarter plan rather than just rolling dice on each next word.

#Reasoning Palette#latent contextualization#VAE
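The "hidden mood" idea can be sketched as sampling a latent code from a VAE-style prior and projecting it to a prefix embedding that conditions the model before it decodes. Everything below (dimensions, the linear projection, the function name `sample_reasoning_context`) is a hypothetical illustration, not the paper's implementation.

```python
import numpy as np

LATENT_DIM = 8   # size of the hidden "mood" code (assumed)
EMBED_DIM = 16   # model embedding size (assumed)

rng = np.random.default_rng(0)
# Random stand-in for a learned projection from latent code to embedding space.
W = rng.standard_normal((LATENT_DIM, EMBED_DIM)) * 0.1

def sample_reasoning_context(temperature: float = 1.0) -> np.ndarray:
    """Sample z ~ N(0, temperature^2 I) and project it to a prefix embedding.

    The prefix would be prepended to the prompt's embeddings, steering the
    whole answer's "plan" rather than randomizing each next token.
    """
    z = rng.standard_normal(LATENT_DIM) * temperature
    return z @ W

prefix = sample_reasoning_context()
```

Raising `temperature` widens the spread of sampled plans, which is the sense in which the latent code gives controllable exploration.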


Reinforcement Learning for Self-Improving Agent with Skill Library

Intermediate
Jiongxiao Wang, Qiaojing Yan et al. · Dec 18 · arXiv

This paper teaches AI agents to learn new reusable skills and get better over time by using reinforcement learning, not just prompts.

#Reinforcement Learning#Skill Library#Sequential Rollout


Turn-PPO: Turn-Level Advantage Estimation with PPO for Improved Multi-Turn RL in Agentic LLMs

Intermediate
Junbo Li, Peng Zhou et al. · Dec 18 · arXiv

Turn-PPO is a new way to train chatty AI agents that act over many steps, by judging each conversation turn as one whole action instead of judging every single token.

#Turn-PPO#multi-turn reinforcement learning#agentic LLMs
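The turn-level idea can be sketched as computing one advantage per conversation turn (turn reward minus a turn-level baseline) and assigning it to every token in that turn, instead of estimating a separate advantage per token. Shapes and the baseline are hypothetical, not the paper's exact estimator.

```python
def turn_level_advantages(turn_rewards, turn_values, turn_lengths):
    """One advantage per turn, broadcast to all of that turn's tokens.

    turn_rewards: reward earned by each turn
    turn_values:  baseline value estimate for each turn
    turn_lengths: number of tokens generated in each turn
    """
    per_token = []
    for reward, value, n_tokens in zip(turn_rewards, turn_values, turn_lengths):
        advantage = reward - value          # the whole turn is one "action"
        per_token.extend([advantage] * n_tokens)
    return per_token

# Two turns: 3 tokens then 2 tokens, shared baseline 0.5.
adv = turn_level_advantages([1.0, 0.0], [0.5, 0.5], [3, 2])
# adv == [0.5, 0.5, 0.5, -0.5, -0.5]
```

Every token in a turn shares one credit signal, which is the paper's point: judge the turn as a whole action rather than each token separately.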


Differences That Matter: Auditing Models for Capability Gap Discovery and Rectification

Intermediate
Qihao Liu, Chengzhi Mao et al. · Dec 18 · arXiv

AuditDM is a friendly 'auditor' model that hunts for where vision-language models get things wrong and then creates the right practice to fix them.

#AuditDM#model auditing#cross-model divergence


RePlan: Reasoning-guided Region Planning for Complex Instruction-based Image Editing

Intermediate
Tianyuan Qu, Lei Ke et al. · Dec 18 · arXiv

RePlan is a plan-then-execute system that first figures out exactly where to edit in a picture and then makes clean changes there.

#instruction-based image editing#vision–language model (VLM)#diffusion model


JustRL: Scaling a 1.5B LLM with a Simple RL Recipe

Intermediate
Bingxiang He, Zekai Qu et al. · Dec 18 · arXiv

JustRL shows that a tiny, steady recipe for reinforcement learning (RL) can make a 1.5B-parameter language model much better at math without fancy tricks.

#Reinforcement Learning#GRPO#Policy Entropy
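The GRPO-style advantage that recipes like this build on can be sketched in a few lines: sample a group of answers to the same question, score each, and standardize the rewards within the group, so no learned value network is needed. A minimal illustration, not JustRL's full recipe.

```python
import statistics

def grpo_advantages(group_rewards, eps=1e-8):
    """Advantage of each sampled answer = its reward, standardized within the group."""
    mean = statistics.fmean(group_rewards)
    std = statistics.pstdev(group_rewards)
    return [(r - mean) / (std + eps) for r in group_rewards]

# Four sampled answers to one math problem: two correct (1), two wrong (0).
adv = grpo_advantages([1.0, 0.0, 1.0, 0.0])
# correct answers get positive advantage, wrong ones negative
```

Because advantages are relative within each group, a question where every sample succeeds (or every sample fails) contributes no gradient, which keeps the simple recipe stable.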


Skyra: AI-Generated Video Detection via Grounded Artifact Reasoning

Intermediate
Yifei Li, Wenzhao Zheng et al. · Dec 17 · arXiv

Skyra is a detective-style AI that spots tiny visual mistakes (artifacts) in videos to tell if they are real or AI-generated, and it explains its decision with times and places in the video.

#AI-generated video detection#artifact reasoning#multimodal large language model


Can LLMs Guide Their Own Exploration? Gradient-Guided Reinforcement Learning for LLM Reasoning

Intermediate
Zhenwen Liang, Sidi Lu et al. · Dec 17 · arXiv

This paper teaches large language models (LLMs) to explore smarter by listening to their own gradients (the directions they would update) rather than chasing random variety.

#gradient-guided reinforcement learning#GRL#GRPO
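A toy sketch of gradient-guided exploration: instead of picking candidate responses for further training at random, score each by the size of the parameter update it would cause (its gradient norm) and explore the most informative one. The tiny linear model and logistic loss below are stand-ins for the LLM policy, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)
w = rng.standard_normal(4)                 # tiny linear "policy" parameters

def grad_norm(x: np.ndarray, label: float) -> float:
    """Gradient norm of a logistic loss for one (features, label) sample."""
    p = 1.0 / (1.0 + np.exp(-x @ w))       # predicted probability
    grad = (p - label) * x                 # d(loss)/d(w) for logistic loss
    return float(np.linalg.norm(grad))

# Five candidate "responses" as feature vectors, with their reward labels.
candidates = [rng.standard_normal(4) for _ in range(5)]
labels = [1.0, 0.0, 1.0, 1.0, 0.0]

scores = [grad_norm(x, y) for x, y in zip(candidates, labels)]
chosen = int(np.argmax(scores))            # explore the highest-gradient sample
```

Samples the model already predicts well produce near-zero gradients, so this rule naturally steers exploration toward what the model would actually learn from.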


EVOLVE-VLA: Test-Time Training from Environment Feedback for Vision-Language-Action Models

Intermediate
Zechen Bai, Chen Gao et al. · Dec 16 · arXiv

Robots usually learn by copying many demonstrations, which is expensive and leaves them brittle when things change; EVOLVE-VLA instead lets vision-language-action models keep improving at test time using feedback from the environment.

#EVOLVE-VLA#test-time training#vision-language-action

