How I Study AI - Learn AI Papers & Lectures the Easy Way

Papers (38)

#retrieval-augmented generation

AgentLongBench: A Controllable Long Benchmark For Long-Context Agents via Environment Rollouts

Intermediate
Shicheng Fang, Yuxin Wang et al. · Jan 28 · arXiv

AgentLongBench is a new test that checks how well AI agents think over very long stories made of their own actions and the world's replies, not just over static documents they read.

#AgentLongBench · #long-context agents · #environment rollouts
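The "environment rollouts" idea can be pictured with a toy loop (my illustration, not the authors' code): the long context is built by interleaving the agent's actions with the environment's replies, and its length is directly controllable via the number of steps.

```python
# Toy sketch of a controllable-length context built from environment
# rollouts. `act` and `env_step` are hypothetical stand-ins for an agent
# policy and an environment; in the real benchmark both would be richer.

def rollout_context(num_steps, act, env_step):
    """Interleave actions and observations into one long transcript."""
    history = []
    obs = "start"
    for t in range(num_steps):
        action = act(obs, t)        # agent decides from the latest observation
        obs = env_step(action, t)   # environment replies
        history.append(f"[step {t}] action: {action} | obs: {obs}")
    return "\n".join(history)

# 100 steps -> a transcript 100 lines long, by construction.
ctx = rollout_context(
    num_steps=100,
    act=lambda obs, t: f"inspect({t})",
    env_step=lambda a, t: f"result of {a}",
)
```

The point of the sketch is the knob: `num_steps` controls context length exactly, which is what makes such a benchmark "controllable".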

PaperSearchQA: Learning to Search and Reason over Scientific Papers with RLVR

Intermediate
James Burgess, Jan N. Hansen et al. · Jan 26 · arXiv

This paper teaches a language-model agent to look up facts in millions of scientific paper summaries and answer clear, single-answer questions.

#RLVR · #search agents · #PaperSearchQA
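The "clear, single-answer questions" matter because RLVR (reinforcement learning with verifiable rewards) needs an answer it can check automatically. A minimal sketch of such a reward, assuming a simple normalized exact match (my simplification, not the paper's scoring code):

```python
# A verifiable reward for single-answer questions: 1.0 only when the
# normalized prediction exactly matches the normalized gold answer.
# The gene-name example below is purely illustrative.

def normalize(text):
    return " ".join(text.lower().strip().split())

def verifiable_reward(predicted, gold):
    return 1.0 if normalize(predicted) == normalize(gold) else 0.0

assert verifiable_reward("  BRCA1 ", "brca1") == 1.0   # case/whitespace ignored
assert verifiable_reward("BRCA2", "brca1") == 0.0      # wrong answer scores 0
```

Because the reward is computable without a human judge, the search agent can be trained on millions of questions at scale.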

SAGE: Steerable Agentic Data Generation for Deep Search with Execution Feedback

Intermediate
Fangyuan Xu, Rujun Han et al. · Jan 26 · arXiv

SAGE is a two-agent system that automatically writes tough, multi-step search questions and checks them by actually trying to solve them.

#deep search · #agentic data generation · #execution feedback
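As I read the summary, the two-agent loop works roughly like this sketch (names and filtering rule are mine): a proposer writes a candidate question with an intended answer, a solver actually attempts it, and only questions that survive this execution feedback are kept.

```python
# Generator/checker loop: keep a question only if the solver, trying it
# for real, reproduces the intended answer. `propose` and `solve` are toy
# stand-ins for what would be LLM agents in the real system.

def generate_dataset(propose, solve, n_candidates):
    kept = []
    for i in range(n_candidates):
        question, intended = propose(i)
        attempt = solve(question)          # execution feedback: actually try it
        if attempt == intended:            # keep only questions that check out
            kept.append((question, intended))
    return kept

# Toy stand-ins: the solver only succeeds on even-numbered questions,
# so half the candidates are filtered out.
data = generate_dataset(
    propose=lambda i: (f"Q{i}?", f"A{i}"),
    solve=lambda q: "A" + q[1:-1] if int(q[1:-1]) % 2 == 0 else None,
    n_candidates=6,
)
```

The design choice being illustrated: the checker does not judge the question's wording, it judges whether the question is actually solvable end to end.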

Typhoon-S: Minimal Open Post-Training for Sovereign Large Language Models

Beginner
Kunat Pipatanakul, Pittawat Taveekitworachai · Jan 26 · arXiv

Typhoon-S is a simple, open recipe that turns a basic language model into a helpful assistant and then teaches it important local skills, all on small budgets.

#Typhoon-S · #on-policy distillation · #full-logits distillation
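The "full-logits distillation" named in the tags can be sketched in a few lines (a simplified, pure-Python illustration, not the paper's training code): the student is pushed to match the teacher's whole probability distribution over the vocabulary, here via the forward KL divergence, rather than just copying the teacher's top token.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def distill_loss(teacher_logits, student_logits):
    """Forward KL(teacher || student) over one vocabulary position."""
    p = softmax(teacher_logits)   # teacher distribution (the target)
    q = softmax(student_logits)   # student distribution
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

assert distill_loss([1.0, 2.0, 0.5], [1.0, 2.0, 0.5]) < 1e-9  # identical -> ~0
assert distill_loss([0.0, 3.0], [3.0, 0.0]) > 0.0             # mismatch -> positive
```

Matching the full distribution transfers more signal per token than hard labels do, which is one way a recipe like this can stay cheap.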

DRPG (Decompose, Retrieve, Plan, Generate): An Agentic Framework for Academic Rebuttal

Intermediate
Peixuan Han, Yingjie Yu et al. · Jan 26 · arXiv

DRPG is a four-step AI helper that writes strong academic rebuttals by first breaking a review into parts, then fetching evidence, planning a strategy, and finally writing the response.

#academic rebuttal · #agentic framework · #planning with LLMs
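The four named stages compose into a straightforward pipeline. A skeletal sketch (every function here is a placeholder for what would be an LLM call in the real framework):

```python
# Decompose -> Retrieve -> Plan -> Generate, wired as plain function calls.

def rebuttal_pipeline(review, decompose, retrieve, plan, generate):
    points = decompose(review)                    # 1. split the review into points
    evidence = {p: retrieve(p) for p in points}   # 2. fetch evidence per point
    strategy = plan(points, evidence)             # 3. choose a response strategy
    return generate(points, evidence, strategy)   # 4. write the rebuttal

# Toy stand-ins for the four LLM-backed stages:
out = rebuttal_pipeline(
    "novelty unclear; missing baseline",
    decompose=lambda r: [p.strip() for p in r.split(";")],
    retrieve=lambda p: f"evidence for '{p}'",
    plan=lambda pts, ev: "address point by point",
    generate=lambda pts, ev, s: f"{s}: " + " | ".join(ev[p] for p in pts),
)
```

Keeping the stages as separate steps means each one can be inspected or improved on its own, which is the usual argument for this kind of agentic decomposition.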

A Mechanistic View on Video Generation as World Models: State and Dynamics

Intermediate
Luozhou Wang, Zhifei Chen et al. · Jan 22 · arXiv

This paper says modern video generators are starting to act like tiny "world simulators," not just pretty video painters.

#world models · #video generation · #state representation

Dancing in Chains: Strategic Persuasion in Academic Rebuttal via Theory of Mind

Beginner
Zhitao He, Zongwei Lyu et al. · Jan 22 · arXiv

Academic rebuttals are not just about being polite; they are about smart, strategic persuasion under hidden information.

#academic rebuttal · #theory of mind · #strategic persuasion

Toward Efficient Agents: Memory, Tool Learning, and Planning

Intermediate
Xiaofang Yang, Lijun Li et al. · Jan 20 · arXiv

This survey explains how to make AI agents not just smart, but also efficient with their time, memory, and tool use.

#agent efficiency · #memory compression · #tool learning
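One concrete instance of the "memory compression" idea such surveys cover (my toy example, not from the paper): keep only the most recent turns verbatim and collapse everything older into a single summary line, so the agent's context stops growing without bound.

```python
# Bound context growth: recent turns stay verbatim, older turns are
# collapsed into one summary entry. The default `summarize` is a dummy;
# a real agent would call an LLM here.

def compress(history, keep_last=3,
             summarize=lambda turns: f"[summary of {len(turns)} earlier turns]"):
    if len(history) <= keep_last:
        return list(history)
    older, recent = history[:-keep_last], history[-keep_last:]
    return [summarize(older)] + recent

compressed = compress(["t1", "t2", "t3", "t4", "t5"], keep_last=3)
# -> ["[summary of 2 earlier turns]", "t3", "t4", "t5"]
```

The trade-off this makes explicit: memory cost becomes constant in the number of turns, at the price of lossy recall of the distant past.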

M^4olGen: Multi-Agent, Multi-Stage Molecular Generation under Precise Multi-Property Constraints

Intermediate
Yizhan Li, Florence Cloutier et al. · Jan 15 · arXiv

The paper introduces M^4olGen, a two-stage system that designs new molecules to match exact numbers for several properties (like QED, LogP, MW, HOMO, LUMO) at the same time.

#molecular generation · #multi-property optimization · #fragment-level editing
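"Precise multi-property constraints" means a candidate counts only if every property lands near its target at once. A toy acceptance check (illustrative only; the property values and tolerances below are made up, and real systems compute properties with chemistry toolkits):

```python
# Accept a candidate molecule only when every property is within its
# per-property tolerance of the target value.

def satisfies(props, targets, tol):
    """True iff all targeted properties are within tolerance simultaneously."""
    return all(abs(props[k] - targets[k]) <= tol[k] for k in targets)

targets = {"QED": 0.8, "LogP": 2.5}
tol = {"QED": 0.05, "LogP": 0.3}

assert satisfies({"QED": 0.82, "LogP": 2.4}, targets, tol)       # both in range
assert not satisfies({"QED": 0.60, "LogP": 2.4}, targets, tol)   # QED misses
```

The "at the same time" part is what makes the problem hard: fixing one property (say LogP) with an edit can push another (say QED) out of its window.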

Collaborative Multi-Agent Test-Time Reinforcement Learning for Reasoning

Intermediate
Zhiyuan Hu, Yunhai Hu et al. · Jan 14 · arXiv

This paper introduces MATTRL, a way for multiple AI agents to learn from their own conversations at test time using short, reusable text notes instead of retraining their weights.

#multi-agent systems · #test-time reinforcement learning · #experience retrieval
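Learning "from text notes instead of retraining weights" can be pictured with a tiny memory class (my illustration, not MATTRL's code): notes distilled from earlier episodes are stored as plain strings, retrieved by crude word overlap, and would be prepended to the next prompt.

```python
# Test-time learning without weight updates: experience lives in
# retrievable text notes. Overlap scoring here is deliberately naive;
# a real system would use embeddings.

class NoteMemory:
    def __init__(self):
        self.notes = []

    def add(self, note):
        """Store a short, reusable lesson from a finished episode."""
        self.notes.append(note)

    def retrieve(self, query, k=2):
        """Return the k notes sharing the most words with the query."""
        qw = set(query.lower().split())
        scored = sorted(self.notes,
                        key=lambda n: len(qw & set(n.lower().split())),
                        reverse=True)
        return scored[:k]

mem = NoteMemory()
mem.add("note: tool X fails on empty input")
mem.add("note: cite sources when answering history questions")
hits = mem.retrieve("why does tool X fail", k=1)
```

Because the "learning" is just appending and retrieving strings, it works at test time with frozen model weights, which is the core trick the summary describes.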

The AI Hippocampus: How Far are We From Human Memory?

Intermediate
Zixia Jia, Jiaqi Li et al. · Jan 14 · arXiv

This survey asks how close AI memory systems are to human memory and organizes the answer into three parts: implicit memory (inside the model), explicit memory (outside storage you can look up), and agentic memory (what an AI agent keeps over time to plan and act).

#LLM memory · #implicit memory · #explicit memory

Lost in the Noise: How Reasoning Models Fail with Contextual Distractors

Intermediate
Seongyun Lee, Yongrae Jo et al. · Jan 12 · arXiv

The paper shows that when we give AI lots of extra text, even harmless extra text, it can get badly confused, sometimes losing up to 80% of its accuracy.

#NoisyBench · #Rationale-Aware Reward · #RARE