How I Study AI - Learn AI Papers & Lectures the Easy Way


"retrieval-augmented generation" · 17 results · Keyword search

A-RAG: Scaling Agentic Retrieval-Augmented Generation via Hierarchical Retrieval Interfaces

Intermediate
Mingxuan Du, Benfeng Xu et al. · Feb 3 · arXiv

A-RAG lets the AI choose how to search, what to read, and when to stop, instead of following a fixed recipe.

#Agentic RAG #Hierarchical Retrieval Interfaces #Keyword Search
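The "choose how to search, what to read, and when to stop" loop can be sketched roughly as follows. This is a minimal illustration, not the A-RAG method itself: the tool names and the `llm_decide` policy are hypothetical stand-ins for whatever the agent's model actually emits.

```python
def agentic_retrieve(question, search_tools, llm_decide, max_steps=5):
    """Iteratively search until the agent decides it has enough evidence.

    search_tools: dict mapping a tool name to a callable query -> list of docs.
    llm_decide:   callable (question, evidence) -> action dict; the agent's
                  policy for picking a tool, a query, or stopping.
    """
    evidence = []
    for _ in range(max_steps):
        # The agent, not a fixed pipeline, picks the next action.
        action = llm_decide(question, evidence)
        if action["type"] == "stop":
            break
        tool = search_tools[action["tool"]]
        evidence.extend(tool(action["query"]))
    return evidence
```

The point of the sketch is the control flow: retrieval decisions live inside the loop, so the agent can stop early or switch tools, instead of running a fixed retrieve-then-read recipe.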

FABLE: Forest-Based Adaptive Bi-Path LLM-Enhanced Retrieval for Multi-Document Reasoning

Intermediate
Lin Sun, Linglin Zhang et al. · Jan 26 · arXiv

FABLE is a new retrieval system that helps AI find and combine facts from many documents by letting the AI both organize the library and choose the right shelves to read.

#FABLE #Structured RAG #Hierarchical retrieval

Agentic-R: Learning to Retrieve for Agentic Search

Intermediate
Wenhan Liu, Xinyu Ma et al. · Jan 17 · arXiv

Agentic-R is a new way to teach a search retriever to find not just similar text, but the text that truly helps an AI get the final answer right.

#agentic search #retriever training #passage utility modeling

NAACL: Noise-AwAre Verbal Confidence Calibration for LLMs in RAG Systems

Intermediate
Jiayu Liu, Rui Wang et al. · Jan 16 · arXiv

The paper studies why large language models (LLMs) sound too sure of themselves when using retrieval-augmented generation (RAG) and how to fix it.

#Retrieval-Augmented Generation #Confidence Calibration #Expected Calibration Error

DeepCode: Open Agentic Coding

Beginner
Zongwei Li, Zhonghang Li et al. · Dec 8 · arXiv

DeepCode is an AI coding system that turns long, complicated papers into full, working code repositories.

#agentic coding #document-to-code #information-flow management

Multi-hop Reasoning via Early Knowledge Alignment

Intermediate
Yuxin Wang, Shicheng Fang et al. · Dec 23 · arXiv

This paper adds a tiny but powerful step called Early Knowledge Alignment (EKA) to multi-step retrieval systems so the model takes a quick, smart look at relevant information before it starts planning.

#Retrieval-Augmented Generation #Iterative RAG #Multi-hop Reasoning

FaithLens: Detecting and Explaining Faithfulness Hallucination

Intermediate
Shuzheng Si, Qingyi Wang et al. · Dec 23 · arXiv

Large language models can say things that sound right but aren’t supported by the given document; this is called a faithfulness hallucination.

#faithfulness hallucination #hallucination detection #explainable AI

QuCo-RAG: Quantifying Uncertainty from the Pre-training Corpus for Dynamic Retrieval-Augmented Generation

Intermediate
Dehai Min, Kailin Zhang et al. · Dec 22 · arXiv

QuCo-RAG is a new way to decide when an AI should look things up while it writes, using facts from its training data instead of its own shaky confidence.

#Dynamic RAG #Retrieval-Augmented Generation #Uncertainty Quantification
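The "decide when to look things up" idea reduces to a trigger condition checked during generation. A rough illustration, with made-up counts and threshold: the real QuCo-RAG signal is derived from pre-training corpus statistics, not a hand-built table like this.

```python
def should_retrieve(entity, corpus_counts, threshold=100):
    """Trigger retrieval for entities the model likely saw too rarely.

    corpus_counts: entity -> estimated occurrences in the pre-training
    corpus. Rare entities are a proxy for unreliable parametric knowledge,
    so the generator pauses and looks them up instead of guessing.
    """
    return corpus_counts.get(entity, 0) < threshold

# Illustrative counts only; a well-known entity skips retrieval,
# an obscure one triggers it.
corpus_counts = {"Paris": 1_000_000, "Quelccaya ice cap": 12}
```

The contrast with confidence-based dynamic RAG is that the trigger comes from an external, verifiable statistic rather than the model's own (often miscalibrated) probabilities.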

Panini: Continual Learning in Token Space via Structured Memory

Intermediate
Shreyas Rajesh, Pavan Holur et al. · Feb 16 · arXiv

Panini is a way for AI to keep learning new facts without changing its brain by storing them as tiny linked Q&A facts in an external memory.

#non-parametric continual learning #structured memory #Generative Semantic Workspace
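"Learning without changing its brain" means appending facts to an external store instead of updating model weights. A toy sketch of that idea; Panini's actual memory is a structured, linked representation in token space, not the flat list and keyword match used here.

```python
class ExternalMemory:
    """Continual learning with zero weight updates: facts live outside
    the model as small question-answer pairs."""

    def __init__(self):
        self.facts = []  # list of (question, answer) pairs

    def learn(self, question, answer):
        # "Learning" is just an append; the model itself never changes.
        self.facts.append((question, answer))

    def recall(self, query):
        # Naive keyword overlap between the query and stored questions;
        # a real system would use embeddings or structured traversal.
        q_words = set(query.lower().split())
        best = max(
            self.facts,
            key=lambda f: len(q_words & set(f[0].lower().split())),
            default=None,
        )
        return best[1] if best else None
```

Because nothing parametric changes, there is no catastrophic forgetting: old facts stay retrievable no matter how many new ones are added.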

Breaking the Static Graph: Context-Aware Traversal for Robust Retrieval-Augmented Generation

Intermediate
Kwun Hang Lau, Fangyuan Zhang et al. · Feb 2 · arXiv

CatRAG is a new way for AI to find the right facts by letting the knowledge graph change its paths based on each question.

#Retrieval-Augmented Generation #Knowledge Graph #Personalized PageRank
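Personalized PageRank, named in the tags, is the standard way to rank graph nodes relative to a query: a random walk that keeps restarting at the question's seed nodes. A minimal sketch on a toy adjacency-list graph (the graph and restart set are illustrative; CatRAG's contribution is making the traversal itself context-aware, which this plain version does not do).

```python
def personalized_pagerank(graph, seeds, alpha=0.15, iters=50):
    """Random walk with restart probability alpha to the seed nodes.

    graph: dict node -> list of out-neighbors.
    seeds: set of nodes tied to the question; restart mass goes only here,
    so scores measure proximity to the query, not global popularity.
    (Dangling nodes simply leak mass; fine for a sketch.)
    """
    nodes = list(graph)
    rank = {n: (1.0 / len(seeds) if n in seeds else 0.0) for n in nodes}
    for _ in range(iters):
        new = {n: (alpha / len(seeds) if n in seeds else 0.0) for n in nodes}
        for n in nodes:
            out = graph[n]
            if not out:
                continue
            share = (1 - alpha) * rank[n] / len(out)
            for m in out:
                new[m] += share
        rank = new
    return rank
```

Nodes closer to the seeds score higher, which is why the technique works as a retrieval primitive: facts near the question's entities surface first.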

Benchmarking Large Language Models for Knowledge Graph Validation

Beginner
Farzad Shami, Stefano Marchesin et al. · Feb 11 · arXiv

Knowledge graphs are like giant fact maps, and keeping every fact correct is hard and important.

#Knowledge Graph Validation #Fact Checking #Large Language Models

OpenDecoder: Open Large Language Model Decoding to Incorporate Document Quality in RAG

Intermediate
Fengran Mo, Zhan Su et al. · Jan 13 · arXiv

OpenDecoder teaches large language models (LLMs) to pay more attention to better documents during Retrieval-Augmented Generation (RAG).

#Retrieval-Augmented Generation #LLM Decoding #Attention Modulation

Agentic Very Long Video Understanding

Intermediate
Aniket Rege, Arka Sadhu et al. · Jan 26 · arXiv

The paper tackles understanding super long, first‑person videos (days to a week) by giving an AI a smarter memory and better tools.

#entity scene graph #agentic planning #long-horizon video understanding

Mindscape-Aware Retrieval Augmented Generation for Improved Long Context Understanding

Intermediate
Yuqing Li, Jiangnan Li et al. · Dec 19 · arXiv

Humans keep a big-picture memory (a “mindscape”) when reading long things; this paper teaches AI to do the same.

#Retrieval-Augmented Generation #Mindscape #Hierarchical Summarization

Improving Multi-step RAG with Hypergraph-based Memory for Long-Context Complex Relational Modeling

Intermediate
Chulun Zhou, Chunkang Zhang et al. · Dec 30 · arXiv

Multi-step RAG systems often struggle with long documents because their memory is just a pile of isolated facts, not a connected understanding.

#multi-step RAG #hypergraph memory #hyperedge merging

WildGraphBench: Benchmarking GraphRAG with Wild-Source Corpora

Beginner
Pengyu Wang, Benfeng Xu et al. · Feb 2 · arXiv

WildGraphBench is a new test that checks how well GraphRAG systems find and combine facts from messy, real-world web pages.

#GraphRAG #Retrieval-Augmented Generation #Wikipedia references

ViDoRe V3: A Comprehensive Evaluation of Retrieval Augmented Generation in Complex Real-World Scenarios

Intermediate
António Loison, Quentin Macé et al. · Jan 13 · arXiv

ViDoRe V3 is a big, carefully built test that checks how well AI systems find and use information from both text and pictures (like tables and charts) in real documents.

#Retrieval-Augmented Generation #Multimodal RAG #Visual Document Understanding