How I Study AI - Learn AI Papers & Lectures the Easy Way

Papers (1055)


Search-R2: Enhancing Search-Integrated Reasoning via Actor-Refiner Collaboration

Intermediate
Bowei He, Minda Hu et al. · Feb 3 · arXiv

This paper teaches AI to look things up on the web and fix its own mistakes mid-thought instead of starting over from scratch.

#search-integrated reasoning · #reinforcement learning · #credit assignment
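
A loose sketch of the actor-refiner loop as the summary describes it; every interface below (actor, refiner, search) is invented for illustration, not taken from the paper:

```python
# Hypothetical interfaces: the actor proposes a reasoning step (possibly with
# a search query), and the refiner patches a flawed step in place so the agent
# does not have to restart the whole chain.
def solve(question, actor, refiner, search, max_steps=6):
    steps = []
    for _ in range(max_steps):
        step = actor.next_step(question, steps)
        if step.needs_search:
            step.evidence = search(step.query)   # look things up on the web
        if refiner.is_flawed(step, steps):
            step = refiner.rewrite(step, steps)  # fix mid-thought, keep prefix
        steps.append(step)
        if step.is_final:
            return step.answer
```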

Learning Query-Specific Rubrics from Human Preferences for DeepResearch Report Generation

Intermediate
Changze Lv, Jie Zhou et al. · Feb 3 · arXiv

DeepResearch agents write long, evidence-based reports, but training and grading them is hard because there is no single 'right answer' to score against; this paper instead learns a query-specific rubric from human preferences to serve as the grading standard.

#DeepResearch · #query-specific rubrics · #human preference learning
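
One way to picture "learning rubrics from preferences" (a toy construction of mine, not the paper's method): fit per-criterion weights so that rubric scores agree with human pairwise choices, in the style of a Bradley-Terry model:

```python
import numpy as np

def fit_rubric_weights(crit_a, crit_b, prefs, lr=0.1, epochs=200):
    # crit_a, crit_b: (n_pairs, n_criteria) per-criterion scores of two reports;
    # prefs: 1.0 where the human preferred report A, else 0.0.
    w = np.zeros(crit_a.shape[1])
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-(crit_a - crit_b) @ w))  # P(A preferred | rubric)
        w += lr * (crit_a - crit_b).T @ (prefs - p)   # logistic-regression step
    return w  # criteria humans implicitly care about end up with large weights
```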

HY3D-Bench: Generation of 3D Assets

Intermediate
Team Hunyuan3D et al. · Feb 3 · arXiv

HY3D-Bench is a complete, open-source “starter kit” for making and studying high-quality 3D objects.

#HY3D-Bench · #watertight meshes · #part-level decomposition

HySparse: A Hybrid Sparse Attention Architecture with Oracle Token Selection and KV Cache Sharing

Intermediate
Yizhao Gao, Jianyu Wei et al. · Feb 3 · arXiv

HySparse is a new way for AI models to pay attention that mixes a few full attention layers with many fast, memory‑saving sparse layers.

#Hybrid Sparse Attention · #Oracle Token Selection · #KV Cache Sharing
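
A minimal numpy sketch of the hybrid pattern the summary describes (my illustration; the layer counts, selection rule, and sharing scheme are assumptions): one full-attention layer picks the important tokens, and cheap sparse layers reuse its KV cache and attend only to those tokens:

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def full_attention(q, k, v):
    scores = q @ k.T / np.sqrt(q.shape[-1])        # (T, T) attention logits
    return softmax(scores) @ v, scores

def sparse_attention(q, k, v, keep_idx):
    # Cheap layer: attend only to keys/values chosen by the full layer,
    # reusing its KV cache rather than storing a fresh one.
    scores = q @ k[keep_idx].T / np.sqrt(q.shape[-1])
    return softmax(scores) @ v[keep_idx]

T, d, topk = 128, 64, 16
q = k = v = np.random.randn(T, d)
out_full, logits = full_attention(q, k, v)           # one expensive "oracle" layer
keep_idx = np.argsort(logits.mean(axis=0))[-topk:]   # tokens everyone attends to
out_sparse = sparse_attention(q, k, v, keep_idx)     # many fast layers like this
```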

Semantic Routing: Exploring Multi-Layer LLM Feature Weighting for Diffusion Transformers

Intermediate
Bozhou Li, Yushuo Guan et al. · Feb 3 · arXiv

The paper shows that using information from many layers of a language model (not just one) helps text-to-image diffusion transformers follow prompts much better.

#Diffusion Transformer · #Text Conditioning · #Multi-layer LLM Features
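
The core trick is easy to sketch (the shapes and the softmax parameterization are my assumptions): learn a weight per LLM layer and feed the weighted mix of all layers' hidden states to the diffusion transformer:

```python
import numpy as np

num_layers, seq_len, dim = 12, 77, 256
layer_feats = np.random.randn(num_layers, seq_len, dim)  # stand-in LLM states

logits = np.zeros(num_layers)                # learnable per-layer scores
w = np.exp(logits) / np.exp(logits).sum()    # softmax -> mixing weights

# Conditioning signal fed to the diffusion transformer's cross-attention:
cond = np.tensordot(w, layer_feats, axes=1)  # (seq_len, dim) weighted mix
```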

A-RAG: Scaling Agentic Retrieval-Augmented Generation via Hierarchical Retrieval Interfaces

Intermediate
Mingxuan Du, Benfeng Xu et al. · Feb 3 · arXiv

A-RAG lets the AI choose how to search, what to read, and when to stop, instead of following a fixed recipe.

#Agentic RAG · #Hierarchical Retrieval Interfaces · #Keyword Search
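
Sketched as an agent loop (the action names and search API are invented for illustration): the model repeatedly chooses between a shallow keyword search, a deep read of one document, or stopping:

```python
# Hypothetical hierarchical retrieval interface: cheap-but-shallow keyword
# search at one level, expensive-but-deep document reads at another.
def agentic_rag(question, llm, search_api, max_steps=8):
    notes = []
    for _ in range(max_steps):
        action = llm.decide(question, notes)  # e.g. {"op": "search", "query": ...}
        if action["op"] == "search":
            notes += search_api.keyword_search(action["query"], k=5)   # shallow
        elif action["op"] == "read":
            notes.append(search_api.fetch_document(action["doc_id"]))  # deep
        elif action["op"] == "stop":
            break
    return llm.answer(question, notes)
```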

SWE-World: Building Software Engineering Agents in Docker-Free Environments

Intermediate
Shuang Sun, Huatong Song et al. · Feb 3 · arXiv

SWE-World lets code-fixing AI agents practice and learn without heavy Docker containers by using smart models that stand in for the computer environment and its tests.

#SWE-World · #software engineering agents · #Docker-free training
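
The stand-in environment might look something like this minimal sketch (the interface is my guess, not the paper's): an LLM world model predicts what the shell and test runner would print, so no container is needed:

```python
class SimulatedRepoEnv:
    """LLM-backed stand-in for a real shell + test runner (hypothetical API)."""
    def __init__(self, world_model, repo_snapshot):
        self.world_model = world_model   # model that predicts tool output
        self.history = [repo_snapshot]

    def step(self, command):
        # Ask the world model what the real environment *would* print.
        observation = self.world_model.predict(self.history, command)
        self.history.append((command, observation))
        return observation

# Usage sketch: obs = SimulatedRepoEnv(llm, snapshot).step("pytest -q tests/")
```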

SWE-Master: Unleashing the Potential of Software Engineering Agents via Post-Training

Intermediate
Huatong Song, Lisheng Huang et al. · Feb 3 · arXiv

SWE-Master is a fully open, step-by-step recipe for turning a regular coding model into a strong software-fixing agent that works across many steps, files, and tests.

#SWE-Master · #software engineering agent · #long-horizon SFT

On the Entropy Dynamics in Reinforcement Fine-Tuning of Large Language Models

Intermediate
Shumin Wang, Yuexiang Xie et al. · Feb 3 · arXiv

The paper builds a simple, math-light rule to predict whether training makes a language model more open-minded (higher entropy) or more sure of itself (lower entropy).

#reinforcement fine-tuning · #entropy dynamics · #GRPO
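
The quantity being tracked is the Shannon entropy of the model's next-token distribution; a quick self-contained sketch of what "higher" vs. "lower" entropy means here:

```python
import numpy as np

def token_entropy(logits):
    p = np.exp(logits - logits.max())        # stable softmax
    p /= p.sum()
    return -(p * np.log(p + 1e-12)).sum()    # Shannon entropy in nats

print(token_entropy(np.array([5.0, 1.0, 0.5])))  # sure of itself: ~0.15
print(token_entropy(np.array([1.0, 1.0, 1.0])))  # open-minded: ~1.10 (ln 3)
```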

MeKi: Memory-based Expert Knowledge Injection for Efficient LLM Scaling

Intermediate
Ning Ding, Fangcheng Liu et al. · Feb 3 · arXiv

MeKi is a new way to grow a language model’s knowledge by using storage (ROM) instead of extra heavy calculations (FLOPs).

#MeKi · #memory-based scaling · #token-level experts
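
A guess at the flavor of "token-level experts" (my toy construction; the real mechanism is surely richer): a large read-only table keyed by token id whose vectors are added to the hidden state, so knowledge costs storage rather than matmuls:

```python
import numpy as np

vocab, dim = 50_000, 512
expert_memory = np.random.randn(vocab, dim).astype(np.float32)  # lives in "ROM"

def inject_expert_knowledge(hidden, token_ids):
    # One memory read per token instead of extra FLOPs-heavy layers.
    return hidden + expert_memory[token_ids]

hidden = np.random.randn(4, dim).astype(np.float32)  # states for 4 tokens
out = inject_expert_knowledge(hidden, np.array([17, 93, 4088, 9]))
```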

Accurate Failure Prediction in Agents Does Not Imply Effective Failure Prevention

Intermediate
Rakshith Vasudev, Melisa Russak et al. · Feb 3 · arXiv

The paper shows that even if a model is great at predicting when an AI agent will fail, jumping in to “fix” the agent mid-task can still make things worse.

#LLM critic · #execution-time intervention · #disruption–recovery tradeoff

Token Sparse Attention: Efficient Long-Context Inference with Interleaved Token Selection

Intermediate
Dongwon Jo, Beomseok Kang et al. · Feb 3 · arXiv

This paper speeds up how AI models read very long texts by carefully choosing which words (tokens) to focus on at each step.

#Token Sparse Attention · #Dynamic Token Coverage · #Representation Drift
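
The general recipe in the summary, sketched in numpy (the scoring and re-selection rules here are placeholders, not the paper's algorithm): score the cached tokens against the current query at each step and attend only to the top-k:

```python
import numpy as np

def sparse_step(q, k_cache, v_cache, topk=32):
    scores = k_cache @ q                   # relevance of each cached token
    idx = np.argsort(scores)[-topk:]       # re-select top-k at this step
    s = (k_cache[idx] @ q) / np.sqrt(q.size)
    w = np.exp(s - s.max()); w /= w.sum()  # softmax over the kept tokens
    return w @ v_cache[idx]

T, d = 4096, 64
q, K, V = np.random.randn(d), np.random.randn(T, d), np.random.randn(T, d)
out = sparse_step(q, K, V)                 # attends to 32 of 4096 tokens
```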