How I Study AI - Learn AI Papers & Lectures the Easy Way

Papers (22)


Agentic-R: Learning to Retrieve for Agentic Search

Intermediate
Wenhan Liu, Xinyu Ma et al. · Jan 17 · arXiv

Agentic-R is a new way to teach a search retriever to find not just similar text, but the text that truly helps an AI get the final answer right.

#agentic search · #retriever training · #passage utility modeling

Action100M: A Large-scale Video Action Dataset

Intermediate
Delong Chen, Tejaswi Kasarla et al. · Jan 15 · arXiv

Action100M is a gigantic video dataset with about 100 million labeled action moments built automatically from 1.2 million instructional videos.

#Action100M · #open-vocabulary action recognition · #hierarchical temporal segmentation

HeartMuLa: A Family of Open Sourced Music Foundation Models

Intermediate
Dongchao Yang, Yuxin Xie et al. · Jan 15 · arXiv

HeartMuLa is a family of open-source music AI models that can understand and generate full songs with clear lyrics and strong musical structure.

#music generation · #audio tokenizer · #residual vector quantization
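The residual vector quantization tagged above is the standard trick behind most neural audio tokenizers: each codebook stage quantizes whatever error the previous stage left behind, so codes refine the signal coarse-to-fine. A minimal NumPy sketch of the idea (toy random codebooks, not HeartMuLa's actual tokenizer):

```python
import numpy as np

def rvq_encode(x, codebooks):
    """Residual vector quantization: each stage picks the nearest codeword
    to the current residual, then subtracts it, so later stages refine
    what earlier stages missed."""
    residual = x.astype(np.float64)
    codes = []
    for cb in codebooks:  # cb has shape (codebook_size, dim)
        dists = np.linalg.norm(cb - residual, axis=1)
        idx = int(np.argmin(dists))
        codes.append(idx)
        residual = residual - cb[idx]
    return codes, residual

def rvq_decode(codes, codebooks):
    # Reconstruction is the sum of the chosen codeword from every stage.
    return sum(cb[i] for cb, i in zip(codebooks, codes))

# Toy demo with 3 stages of 16 random codewords each (illustrative only).
rng = np.random.default_rng(0)
dim = 8
codebooks = [rng.normal(size=(16, dim)) for _ in range(3)]
x = rng.normal(size=dim)
codes, residual = rvq_encode(x, codebooks)
x_hat = rvq_decode(codes, codebooks)
```

In a real audio tokenizer the codebooks are learned and `x` is a latent frame from an encoder; the per-stage code indices are what a language model over music would predict.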

Qwen3-VL-Embedding and Qwen3-VL-Reranker: A Unified Framework for State-of-the-Art Multimodal Retrieval and Ranking

Intermediate
Mingxin Li, Yanzhao Zhang et al. · Jan 8 · arXiv

This paper builds two complementary models, Qwen3-VL-Embedding and Qwen3-VL-Reranker, that map text, images, visual documents, and videos into one shared space so search works across all of them.

#multimodal retrieval · #unified embedding space · #cross-encoder reranker
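The embed-then-rerank pattern behind this pair of models can be sketched in a few lines: a fast embedding model scores everything by cosine similarity in the shared space, then a slower cross-encoder re-scores only the top hits. The toy `embed` below is a hashed bag-of-words stand-in for the real encoder, and the rerank step is a placeholder:

```python
import numpy as np

def embed(texts, dim=64):
    """Stand-in encoder: hash tokens into a bag-of-words vector and
    L2-normalize. A real system would call an embedding model such as
    Qwen3-VL-Embedding here."""
    vecs = np.zeros((len(texts), dim))
    for i, t in enumerate(texts):
        for tok in t.lower().split():
            vecs[i, hash(tok) % dim] += 1.0
    norms = np.linalg.norm(vecs, axis=1, keepdims=True)
    return vecs / np.maximum(norms, 1e-9)

def retrieve_then_rerank(query, docs, k=3):
    # Stage 1: fast dense retrieval by cosine similarity in the shared space.
    q = embed([query])[0]
    scores = embed(docs) @ q
    top = np.argsort(-scores)[:k]
    # Stage 2: a cross-encoder reranker would jointly score each
    # (query, doc) pair; here we simply reuse the embedding scores.
    reranked = sorted(top, key=lambda i: -scores[i])
    return [docs[i] for i in reranked]

docs = [
    "open source music generation models",
    "a large video action dataset",
    "code retrieval with embeddings",
]
top = retrieve_then_rerank("music generation models", docs, k=2)
```

The point of the split is cost: embeddings are precomputable and searchable at scale, while the cross-encoder sees query and document together and is more accurate but too slow to run over the whole corpus.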

C2LLM Technical Report: A New Frontier in Code Retrieval via Adaptive Cross-Attention Pooling

Beginner
Jin Qin, Zihan Liao et al. · Dec 24 · arXiv

C2LLM is a new family of code embedding models that helps computers find the right code faster and more accurately.

#code retrieval · #embedding model · #cross-attention pooling
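The cross-attention pooling in the title replaces mean pooling when collapsing a sequence of token states into one embedding: a learned query vector attends over the tokens, so informative tokens get more weight. A minimal NumPy sketch of that pooling step (random weights for illustration, not C2LLM's trained parameters):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def cross_attention_pool(token_states, query, Wk, Wv):
    """Pool a (T, d) sequence of token states into a single d-dim vector:
    a learned query attends over projected keys, and the output is the
    attention-weighted sum of projected values."""
    keys = token_states @ Wk                          # (T, d)
    values = token_states @ Wv                        # (T, d)
    scores = keys @ query / np.sqrt(query.shape[0])   # (T,)
    weights = softmax(scores)                         # sums to 1 over tokens
    return weights @ values                           # (d,)

# Toy demo: 5 token states of width 4 with random projections.
rng = np.random.default_rng(1)
T, d = 5, 4
states = rng.normal(size=(T, d))
Wk, Wv = rng.normal(size=(d, d)), rng.normal(size=(d, d))
q = rng.normal(size=d)
pooled = cross_attention_pool(states, q, Wk, Wv)
```

Unlike mean pooling, the weights here depend on the content of each token, which is why attention-based pooling tends to produce sharper sequence embeddings.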

GRAN-TED: Generating Robust, Aligned, and Nuanced Text Embedding for Diffusion Models

Intermediate
Bozhou Li, Sihan Yang et al. · Dec 17 · arXiv

GRAN-TED is about making the words you type into an image or video generator turn into the right output more reliably, by training a better text encoder for diffusion models.

#diffusion models · #text encoder · #multimodal large language model

Towards Scalable Pre-training of Visual Tokenizers for Generation

Intermediate
Jingfeng Yao, Yuda Song et al. · Dec 15 · arXiv

The paper tackles a paradox: visual tokenizers that get great pixel reconstructions often make worse images when used for generation.

#visual tokenizer · #latent space · #Vision Transformer

Omni-Attribute: Open-vocabulary Attribute Encoder for Visual Concept Personalization

Intermediate
Tsai-Shien Chen, Aliaksandr Siarohin et al. · Dec 11 · arXiv

Omni-Attribute is a new image encoder that learns just the parts of a picture you ask for (like hairstyle or lighting) and ignores the rest.

#open-vocabulary attribute encoder · #attribute disentanglement · #visual concept personalization

Relational Visual Similarity

Intermediate
Thao Nguyen, Sicheng Mo et al. · Dec 8 · arXiv

Most image-similarity tools only notice how things look (color, shape, class); this paper studies relational similarity, the deeper, human-like connections between images that such tools miss.

#relational similarity · #visual analogy · #anonymous captions

M3DR: Towards Universal Multilingual Multimodal Document Retrieval

Intermediate
Adithya S Kolavi, Vyoman Jain · Dec 3 · arXiv

The paper introduces M3DR, a way for computers to find the right document image no matter which of 22 languages the query or the document uses.

#multilingual retrieval · #multimodal retrieval · #document image search