How I Study AI - Learn AI Papers & Lectures the Easy Way

Search: "RAG" · 20 results · Keyword

Natively Adaptive Interfaces: A new framework for AI accessibility

Google AI Blog · Feb 5 · Google

How Google Cloud is helping Team USA elevate their tricks with AI

Google AI Blog · Feb 5 · Google

Watch our new Gemini ad ahead of football’s biggest weekend

Google AI Blog · Feb 5 · Google

The latest AI news we announced in January

Google AI Blog · Feb 4 · Google

How we’re helping preserve the genetic information of endangered species with AI

Google AI Blog · Feb 2 · Google

Recursive Language Models

Beginner
Alex L. Zhang, Tim Kraska et al. · Dec 31 · arXiv

Recursive Language Models (RLMs) let an AI read and work with prompts that are much longer than its normal memory by treating the prompt like a big external document it can open, search, and study with code.

#Recursive Language Models · #RLM · #Long-context reasoning
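The idea in that summary can be sketched as a tiny loop, under heavy assumptions: `llm` below is a hypothetical stand-in for a real model call, and the "commands" the model issues are reduced to a toy `grep:` convention. The point is only the shape: the full prompt lives outside the model, and the model only ever reads small slices it requests.

```python
# Minimal sketch of the recursive-language-model idea (illustrative,
# not the paper's implementation): the long prompt is an external
# document; the model requests slices of it via simple commands.

def llm(instruction: str, context: str) -> str:
    # Hypothetical stand-in for a real model call, with a fake policy.
    if "which chunk" in instruction:
        # Pretend the model decides to search for the word "answer".
        return "grep:answer"
    return f"Summary of: {context[:40]}"

def recursive_answer(question: str, document: str, chunk_size: int = 200) -> str:
    # The document is never fed to the model whole; it is chunked and
    # queried like an external file.
    chunks = [document[i:i + chunk_size] for i in range(0, len(document), chunk_size)]
    # Step 1: the model chooses how to search the external document.
    command = llm("which chunk should I open? reply grep:<word>", question)
    if command.startswith("grep:"):
        needle = command.split(":", 1)[1]
        hits = [c for c in chunks if needle in c] or chunks[:1]
    else:
        hits = chunks[:1]
    # Step 2: the model reads only the retrieved slice, not the whole prompt.
    return llm("answer using this context", " ".join(hits))
```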

V-Retrver: Evidence-Driven Agentic Reasoning for Universal Multimodal Retrieval

Intermediate
Dongyang Chen, Chaoyang Wang et al. · Feb 5 · arXiv

V-Retrver is a new way for AI to search across text and images by double-checking tiny visual details instead of only guessing from words.

#V-Retrver · #multimodal retrieval · #agentic reasoning

LatentMem: Customizing Latent Memory for Multi-Agent Systems

Intermediate
Muxin Fu, Guibin Zhang et al. · Feb 3 · arXiv

LatentMem is a new memory system that helps teams of AI agents remember the right things for their specific jobs without overloading them with text.

#multi-agent systems · #latent memory · #role-aware memory

Reinforcement World Model Learning for LLM-based Agents

Intermediate
Xiao Yu, Baolin Peng et al. · Feb 5 · arXiv

Large language models are great at words, but they struggle to predict what will happen after they act in a changing world.

#Reinforcement World Model Learning · #world modeling · #LLM agents

SocialVeil: Probing Social Intelligence of Language Agents under Communication Barriers

Intermediate
Keyang Xuan, Pengda Wang et al. · Feb 4 · arXiv

This paper builds SocialVeil, a testing world where AI chat agents must talk to each other even when communication is messy, not perfect.

#social intelligence · #communication barriers · #semantic vagueness

ProAct: Agentic Lookahead in Interactive Environments

Intermediate
Yangbin Yu, Mingyu Yang et al. · Feb 5 · arXiv

ProAct teaches AI agents to think ahead accurately without needing expensive search every time they act.

#ProAct · #GLAD · #MC-Critic

No Global Plan in Chain-of-Thought: Uncover the Latent Planning Horizon of LLMs

Intermediate
Liyan Xu, Mo Yu et al. · Feb 2 · arXiv

Large language models don’t map out a full step-by-step plan before they start thinking; they mostly plan just a little bit ahead.

#chain-of-thought · #latent planning horizon · #Tele-Lens

A-RAG: Scaling Agentic Retrieval-Augmented Generation via Hierarchical Retrieval Interfaces

Intermediate
Mingxuan Du, Benfeng Xu et al. · Feb 3 · arXiv

A-RAG lets the AI choose how to search, what to read, and when to stop, instead of following a fixed recipe.

#Agentic RAG · #Hierarchical Retrieval Interfaces · #Keyword Search
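The "search, read, stop" decision loop that summary describes can be sketched as follows. Everything here is an illustrative assumption: the tiny in-memory `CORPUS`, the `keyword_search`/`read_doc` interface names, and the hard-coded policy standing in for the agent. It shows the loop structure, not A-RAG's actual interfaces.

```python
# Hedged sketch of an agentic retrieval loop: the agent picks among
# hierarchical actions (coarse search, fine-grained read, stop)
# instead of running a fixed retrieve-once pipeline.

CORPUS = {
    "doc1": "RAG combines retrieval with generation.",
    "doc2": "KV caches store attention keys and values.",
}

def keyword_search(query: str) -> list[str]:
    # Coarse interface: ids of docs containing any query word.
    words = query.lower().split()
    return [d for d, text in CORPUS.items()
            if any(w in text.lower() for w in words)]

def read_doc(doc_id: str) -> str:
    # Fine-grained interface: fetch one document's full text.
    return CORPUS[doc_id]

def agent_loop(query: str, max_steps: int = 4) -> str:
    evidence: list[str] = []
    candidates: list[str] = []
    for _ in range(max_steps):
        if not candidates:
            candidates = keyword_search(query)            # action: search
        elif candidates:
            evidence.append(read_doc(candidates.pop(0)))  # action: read
        if evidence:                                      # action: stop
            break
    return " ".join(evidence)
```

A real agent would let the model choose the action at each step; the fixed if/elif policy here just makes the control flow concrete.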

CL-bench: A Benchmark for Context Learning

Beginner
Shihan Dou, Ming Zhang et al. · Feb 3 · arXiv

CL-bench is a new test that checks whether AI can truly learn new things from the information you give it right now, not just from what it memorized before.

#context learning · #benchmark · #rubric-based evaluation

Horizon-LM: A RAM-Centric Architecture for LLM Training

Intermediate
Zhengqing Yuan, Lichao Sun et al. · Feb 4 · arXiv

Horizon-LM flips the usual training setup by keeping the model’s long-lived state in the computer’s main RAM (managed by the CPU) and using the GPU only as a fast, temporary calculator.

#Horizon-LM · #memory-centric training · #CPU-master GPU-template

T2S-Bench & Structure-of-Thought: Benchmarking and Prompting Comprehensive Text-to-Structure Reasoning

Intermediate
Qinsi Wang, Hancheng Ye et al. · Mar 4 · arXiv

This paper shows that teaching AI to first draw a simple map of a text (nodes and links) before answering questions makes it smarter and more reliable.

#Structure of Thought · #Text-to-Structure · #Intermediate Representation
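The "map of nodes and links first, answers second" idea can be made concrete with a toy sketch. The parsing rule below (splitting sentences on the words "is"/"has") is a deliberately naive assumption, not the paper's method; it only shows how answering becomes a graph lookup once the structure exists.

```python
# Toy "structure first" sketch: build a tiny node/link map of a text,
# then answer questions by looking up edges instead of re-reading prose.

def text_to_graph(text: str) -> dict[str, list[tuple[str, str]]]:
    graph: dict[str, list[tuple[str, str]]] = {}
    for sentence in text.split("."):
        words = sentence.strip().split()
        for rel in ("is", "has"):          # naive relation vocabulary
            if rel in words:
                i = words.index(rel)
                subj = " ".join(words[:i])
                obj = " ".join(words[i + 1:])
                graph.setdefault(subj, []).append((rel, obj))
    return graph

def answer(graph: dict, subject: str, relation: str) -> list[str]:
    # A question becomes a lookup over the intermediate structure.
    return [obj for rel, obj in graph.get(subject, []) if rel == relation]

g = text_to_graph("Paris is the capital of France. France has many rivers")
print(answer(g, "Paris", "is"))   # -> ['the capital of France']
```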

Self-Hinting Language Models Enhance Reinforcement Learning

Intermediate
Baohao Liao, Hanze Dong et al. · Feb 3 · arXiv

When rewards are rare, a popular training method for language models (GRPO) often stops learning because every try in a group gets the same score, so there is nothing to compare.

#reinforcement learning · #GRPO · #self-hinting
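The "nothing to compare" failure is easy to see numerically: GRPO standardizes each sampled answer's reward against its group, so if every rollout gets the same score, every advantage is zero and the gradient vanishes. A minimal illustration (simplified group-standardization, not the full GRPO objective):

```python
# Why sparse rewards stall GRPO-style training: advantages are rewards
# standardized within a sampled group. Identical rewards -> zero
# advantages -> no learning signal.

def grpo_advantages(rewards: list[float], eps: float = 1e-8) -> list[float]:
    mean = sum(rewards) / len(rewards)
    var = sum((r - mean) ** 2 for r in rewards) / len(rewards)
    std = var ** 0.5
    return [(r - mean) / (std + eps) for r in rewards]

# Sparse-reward group: every try failed, so there is nothing to compare.
print(grpo_advantages([0.0, 0.0, 0.0, 0.0]))   # every advantage is 0.0

# One success in the group restores a usable signal: the winner gets a
# positive advantage, the failures a negative one.
print(grpo_advantages([0.0, 0.0, 0.0, 1.0]))
```

Self-hinting, as the summary suggests, is one way to make successes less rare so groups stop collapsing to identical scores.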

AdaptMMBench: Benchmarking Adaptive Multimodal Reasoning for Mode Selection and Reasoning Process

Intermediate
Xintong Zhang, Xiaowen Zhang et al. · Feb 2 · arXiv

AdaptMMBench is a new test that checks if AI models know when to just look and think, and when to use extra visual tools like zooming or brightening an image.

#Adaptive Multimodal Reasoning · #Vision-Language Models · #Tool Invocation

LRAgent: Efficient KV Cache Sharing for Multi-LoRA LLM Agents

Intermediate
Hyesung Jeon, Hyeongju Ha et al. · Feb 1 · arXiv

Multi-agent LLM systems often use LoRA adapters so each agent has a special role, but they all rebuild almost the same KV cache, wasting memory and time.

#LoRA · #Multi-LoRA · #KV cache

FASA: Frequency-aware Sparse Attention

Intermediate
Yifei Wang, Yueqi Wang et al. · Feb 3 · arXiv

FASA is a training-free method that makes large language models faster and lighter on memory by keeping only the most useful past tokens during decoding.

#FASA · #Frequency-aware sparse attention · #KV cache compression
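"Keeping only the most useful past tokens" amounts to pruning the KV cache by some usefulness score. A hedged sketch of that generic idea follows; the scoring rule here (accumulated softmax attention mass) is a common stand-in, not FASA's frequency-aware criterion:

```python
# Illustrative KV-cache pruning: after a decode step, keep only the k
# cached tokens with the highest accumulated attention weight.
import math

def softmax(xs: list[float]) -> list[float]:
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def prune_kv(cache: list, scores: list[float], k: int):
    """Keep the k cache entries with the largest scores, in position order."""
    ranked = sorted(range(len(cache)), key=lambda i: scores[i], reverse=True)
    keep = sorted(ranked[:k])            # preserve positional order
    return [cache[i] for i in keep], [scores[i] for i in keep]

# Toy decode step: four cached tokens, attention logits from the query.
cache = ["tok0", "tok1", "tok2", "tok3"]
scores = softmax([0.1, 2.0, 0.3, 1.5])   # attention mass per past token
cache, scores = prune_kv(cache, scores, k=2)
print(cache)   # the two most-attended tokens survive
```

Since no retraining is involved, a method like this is "training-free": it only changes which entries survive in the cache at inference time.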