How I Study AI - Learn AI Papers & Lectures the Easy Way

Papers (13)


On-Policy Self-Distillation for Reasoning Compression

Beginner
Hejian Sang, Yuanda Xu et al. · Mar 5 · arXiv

Reasoning models often talk too much, and those extra words can actually make them more wrong.

#on-policy self-distillation #reasoning compression #conciseness instruction

CharacterFlywheel: Scaling Iterative Improvement of Engaging and Steerable LLMs in Production

Intermediate
Yixin Nie, Lin Guan et al. · Mar 2 · arXiv

CharacterFlywheel is a step‑by‑step loop that steadily improves chatty AI characters by learning from real conversations on Instagram, WhatsApp, and Messenger.

#CharacterFlywheel #large language models #conversational AI

MobilityBench: A Benchmark for Evaluating Route-Planning Agents in Real-World Mobility Scenarios

Beginner
Zhiheng Song, Jingshuai Zhang et al. · Feb 26 · arXiv

MobilityBench is a big, carefully built test that checks how well AI helpers can plan real-world routes using natural language and map tools.

#MobilityBench #route-planning agents #large language models

ROCKET: Rapid Optimization via Calibration-guided Knapsack Enhanced Truncation for Efficient Model Compression

Intermediate
Ammar Ali, Baher Mohammad et al. · Feb 11 · arXiv

ROCKET is a fast, training-free way to shrink big AI models while keeping most of their smarts.

#model compression #training-free compression #sparse factorization

Unveiling Implicit Advantage Symmetry: Why GRPO Struggles with Exploration and Difficulty Adaptation

Intermediate
Zhiqi Yu, Zhangquan Chen et al. · Feb 5 · arXiv

The paper finds a hidden symmetry inside GRPO’s advantage calculation that accidentally stops models from exploring promising new answers and from giving easy and hard problems the right amount of attention at the right time.

#GRPO #GRAE #A-GRAE
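The symmetry the paper points to can be seen directly in GRPO’s group-normalized advantage: within one sampled group, the advantages always cancel out, so every positive push on one answer is mirrored by negative pushes on the others, and a fully solved (or fully failed) group yields no signal at all. A minimal sketch of the standard GRPO advantage (names are illustrative, not from the paper):

```python
from statistics import fmean, pstdev

def grpo_advantages(rewards, eps=1e-8):
    """GRPO-style group-relative advantage: normalize each sampled
    answer's reward by the group's mean and standard deviation."""
    mu = fmean(rewards)
    sigma = pstdev(rewards)
    return [(r - mu) / (sigma + eps) for r in rewards]

# A group of 4 sampled answers to one prompt, binary correctness rewards:
adv = grpo_advantages([1.0, 0.0, 0.0, 1.0])
print(sum(adv))  # the implicit symmetry: advantages sum to ~0

# A prompt the model always solves gives zero advantage everywhere,
# so easy prompts contribute no gradient regardless of difficulty.
print(grpo_advantages([1.0, 1.0, 1.0, 1.0]))
```

The zero-sum property holds for any reward pattern, which is why it acts like a built-in constraint on exploration rather than a tunable choice.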

Learning Rate Matters: Vanilla LoRA May Suffice for LLM Fine-tuning

Beginner
Yu-Ang Lee, Ching-Yun Ko et al. · Feb 4 · arXiv

When you tune the learning rate carefully, plain old LoRA fine-tuning works about as well as fancy new versions.

#LoRA #parameter-efficient fine-tuning #learning rate tuning
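For context, “vanilla” LoRA freezes the base weight W and trains only a low-rank update scaled by alpha/r, so the effective weight is W + (alpha/r)·B·A; the paper’s claim is that tuning the learning rate for this plain form already matches fancier variants. A toy sketch in pure Python (shapes and values are illustrative):

```python
def matmul(A, B):
    """Naive matrix multiply for small illustrative matrices."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def lora_weight(W, A, B, alpha, r):
    """Effective weight of a LoRA layer: W + (alpha / r) * B @ A.
    W stays frozen; only the small factors A (r x d_in) and
    B (d_out x r) are trained."""
    scale = alpha / r
    delta = matmul(B, A)
    return [[w + scale * d for w, d in zip(wrow, drow)]
            for wrow, drow in zip(W, delta)]

W = [[1.0, 0.0], [0.0, 1.0]]   # 2x2 frozen base weight
A = [[1.0, 2.0]]               # r x d_in  = 1 x 2, trainable
B = [[0.5], [0.25]]            # d_out x r = 2 x 1, trainable
print(lora_weight(W, A, B, alpha=2.0, r=1))
```

At rank 1 only 4 of the 8 parameters are trainable, which is the whole point: the capacity is fixed, so the learning rate becomes the knob that matters.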

Enhancing Sentiment Classification and Irony Detection in Large Language Models through Advanced Prompt Engineering Techniques

Beginner
Marvin Schmitt, Anne Schwerk et al. · Jan 13 · arXiv

Giving large language models a few good examples and step-by-step instructions can make them much better at spotting feelings in text.

#prompt engineering #few-shot learning #chain-of-thought
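The two techniques the summary names, few-shot examples and step-by-step (chain-of-thought) instructions, compose naturally into a single prompt. A hedged sketch of such a template; the labels, wording, and demo examples here are made up for illustration, not taken from the paper:

```python
def build_prompt(examples, query):
    """Compose a few-shot, chain-of-thought sentiment/irony prompt:
    each demo shows a short reasoning step before its label."""
    parts = ["Decide whether each text is POSITIVE, NEGATIVE, or IRONIC.",
             "Think step by step, then give the label.", ""]
    for text, reasoning, label in examples:
        parts += [f"Text: {text}",
                  f"Reasoning: {reasoning}",
                  f"Label: {label}", ""]
    parts += [f"Text: {query}", "Reasoning:"]
    return "\n".join(parts)

demos = [
    ("What a great day!", "Plain praise, no clash with context.", "POSITIVE"),
    ("Oh great, another delay.", "'Great' clashes with a bad event.", "IRONIC"),
]
print(build_prompt(demos, "The service was slow and the food cold."))
```

Ending the prompt at "Reasoning:" nudges the model to produce its step-by-step explanation before committing to a label.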

Same Claim, Different Judgment: Benchmarking Scenario-Induced Bias in Multilingual Financial Misinformation Detection

Beginner
Zhiwei Liu, Yupen Cao et al. · Jan 8 · arXiv

This paper builds MFMD-Scen, a big test to see how an AI changes its true-or-false judgment about the same money-related claim when the scenario around it changes.

#financial misinformation detection #scenario-induced bias #multilingual benchmark

Digital Twin AI: Opportunities and Challenges from Large Language Models to World Models

Intermediate
Rong Zhou, Dongping Chen et al. · Jan 4 · arXiv

A digital twin is a living computer copy of a real thing (like a bridge, a heart, or a factory) that stays in sync with sensors and helps us predict, fix, and improve the real thing.

#digital twin #physics-informed AI #neural operators

Can LLMs Predict Their Own Failures? Self-Awareness via Internal Circuits

Intermediate
Amirhosein Ghasemabadi, Di Niu · Dec 23 · arXiv

Large language models often sound confident even when they are wrong, and existing ways to catch mistakes are slow or not very accurate.

#self-awareness #large language models #hidden states

Entropy Ratio Clipping as a Soft Global Constraint for Stable Reinforcement Learning

Intermediate
Zhenpeng Su, Leiyu Pan et al. · Dec 5 · arXiv

Reinforcement learning (RL) can make big language models smarter, but off-policy training often pushes updates too far from the “safe zone,” causing unstable learning.

#reinforcement learning #PPO-clip #KL penalty
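For background on the “safe zone”: the standard PPO-clip objective (listed in the tags) bounds the per-token probability ratio between the new and old policies, so an update stops earning extra reward once it strays too far. A minimal sketch of that standard clipped surrogate; the paper’s entropy-ratio constraint is a softer, global variant of this idea and is not reproduced here:

```python
def ppo_clip_loss(ratio, advantage, eps=0.2):
    """Standard PPO clipped surrogate (to be minimized): the update
    gets no extra credit once the probability ratio leaves the
    [1 - eps, 1 + eps] safe zone."""
    clipped = max(1.0 - eps, min(1.0 + eps, ratio))
    return -min(ratio * advantage, clipped * advantage)

# Inside the safe zone the raw ratio is used...
print(ppo_clip_loss(1.1, advantage=1.0))  # -1.1
# ...but a ratio far outside is truncated, capping the incentive.
print(ppo_clip_loss(2.0, advantage=1.0))  # -1.2
```

Because clipping is applied token by token, it is only a local constraint, which is the gap a global entropy-ratio condition aims to close.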

BEAVER: An Efficient Deterministic LLM Verifier

Intermediate
Tarun Suresh, Nalin Wadhwa et al. · Dec 5 · arXiv

BEAVER is a new way to check, with guaranteed certainty, how likely a language model is to give answers that obey important rules.

#BEAVER #deterministic verification #large language models