How I Study AI - Learn AI Papers & Lectures the Easy Way

Papers (915)


Statistical Estimation of Adversarial Risk in Large Language Models under Best-of-N Sampling

Intermediate
Mingqian Feng, Xiaodong Liu et al. · Jan 30 · arXiv

Real attackers can try many prompts in parallel until a model slips, so testing safety with only one try badly underestimates risk.

#Best-of-N sampling · #Adversarial risk · #Attack Success Rate (ASR)
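The compounding risk the summary describes can be sketched in a few lines (a hypothetical helper, not the paper's estimator): if one attempt succeeds with probability p, the chance that at least one of N independent tries succeeds is 1 - (1 - p)^N, so single-try evaluation badly understates an attacker with many parallel prompts.

```python
def asr_at_n(p_single: float, n: int) -> float:
    """Attack success rate when an attacker gets n independent tries,
    each succeeding with probability p_single."""
    return 1.0 - (1.0 - p_single) ** n

# Even a 1% per-try success rate compounds quickly with parallel attempts.
print(asr_at_n(0.01, 1))    # one try
print(asr_at_n(0.01, 100))  # one hundred parallel tries
```

Note this assumes independent attempts; the paper's statistical treatment of best-of-N sampling may be more refined.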

TTCS: Test-Time Curriculum Synthesis for Self-Evolving

Intermediate
Chengyi Yang, Zhishang Xiang et al. · Jan 30 · arXiv

TTCS lets a model teach itself at test time: it first generates easier practice questions similar to the real hard question, then learns from them before answering.

#test-time training · #test-time reinforcement learning · #curriculum learning

Rethinking LLM-as-a-Judge: Representation-as-a-Judge with Small Language Models via Semantic Capacity Asymmetry

Intermediate
Zhuochun Li, Yong Zhang et al. · Jan 30 · arXiv

Big models are often used to grade AI answers, but they are expensive, slow, and depend too much on tricky prompts.

#Representation-as-a-Judge · #Semantic Capacity Asymmetry · #LLM-as-a-Judge

SSL: Sweet Spot Learning for Differentiated Guidance in Agentic Optimization

Beginner
Jinyang Wu, Changpeng Yang et al. · Jan 30 · arXiv

Most reinforcement learning agents only get a simple pass/fail reward, which hides how good or bad their attempts really were.

#Sweet Spot Learning · #tiered rewards · #reinforcement learning with verifiable rewards

RAPTOR: Ridge-Adaptive Logistic Probes

Intermediate
Ziqi Gao, Yaotian Zhu et al. · Jan 29 · arXiv

RAPTOR is a simple, fast way to find a direction (a concept vector) inside a frozen language model that points toward a concept like 'sarcasm' or 'positivity.'

#probing · #concept vectors · #activation steering
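A minimal illustration of the probe idea, assuming a plain ridge-regularized logistic regression fit on frozen activations (the paper's exact RAPTOR procedure, including its ridge adaptation, may differ): the learned weight vector, normalized, serves as the concept direction.

```python
import numpy as np

def fit_probe(X, y, ridge=1.0, lr=0.1, steps=500):
    """Fit a ridge-regularized logistic probe on activations X with
    binary concept labels y; return a unit-norm concept vector."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))           # sigmoid predictions
        grad = X.T @ (p - y) / len(y) + ridge * w / len(y)
        w -= lr * grad
    return w / np.linalg.norm(w)

# Toy data: dimension 0 of the "activations" carries the concept.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))
y = (X[:, 0] > 0).astype(float)
w = fit_probe(X, y)  # w should point mostly along dimension 0
```

With real models, X would come from a frozen layer's hidden states and the resulting direction could be used for steering, as the tags suggest.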

One-step Latent-free Image Generation with Pixel Mean Flows

Beginner
Yiyang Lu, Susie Lu et al. · Jan 29 · arXiv

This paper shows how to make a whole picture in one go, directly in pixels, without using a hidden “latent” space or many tiny steps.

#pixel MeanFlow · #one-step generation · #x-prediction

Discovering Hidden Gems in Model Repositories

Intermediate
Jonathan Kahana, Eliahu Horwitz et al. · Jan 29 · arXiv

Millions of public AI models exist, but downloads are concentrated on a tiny set of “official” checkpoints, which are not always the best performers.

#hidden gems · #model repositories · #model trees

Hybrid Linear Attention Done Right: Efficient Distillation and Effective Architectures for Extremely Long Contexts

Intermediate
Yingfa Chen, Zhen Leng Thai et al. · Jan 29 · arXiv

This paper shows how to turn a big Transformer model into a faster hybrid model that mixes attention and RNN layers using far less training data (about 2.3B tokens).

#hybrid attention · #RNN attention hybrid · #linear attention

Exploring Reasoning Reward Model for Agents

Intermediate
Kaixuan Fan, Kaituo Feng et al. · Jan 29 · arXiv

The paper teaches AI agents better by grading not just their final answers, but also how they think and use tools along the way.

#Agentic Reinforcement Learning · #Reasoning Reward Model · #Process Supervision

DynamicVLA: A Vision-Language-Action Model for Dynamic Object Manipulation

Intermediate
Haozhe Xie, Beichen Wen et al. · Jan 29 · arXiv

DynamicVLA is a small and fast robot brain that sees, reads, and acts while things are moving.

#Dynamic object manipulation · #Vision-Language-Action · #Continuous inference

FineInstructions: Scaling Synthetic Instructions to Pre-Training Scale

Intermediate
Ajay Patel, Colin Raffel et al. · Jan 29 · arXiv

Large language models usually learn by guessing the next word, then get a tiny bit of instruction-following practice; this paper flips that by turning massive web documents into instruction-and-answer pairs at huge scale.

#FineInstructions · #synthetic instruction–answer pairs · #instruction-tuning pre-training

JUST-DUB-IT: Video Dubbing via Joint Audio-Visual Diffusion

Intermediate
Anthony Chen, Naomi Ken Korem et al. · Jan 29 · arXiv

This paper presents a simple, single-model approach to video dubbing that keeps the new voice and the speaker's lip movements naturally in sync.

#video dubbing · #audio-visual diffusion · #joint generation