How I Study AI - Learn AI Papers & Lectures the Easy Way

Papers (924)


RubricHub: A Comprehensive and Highly Discriminative Rubric Dataset via Automated Coarse-to-Fine Generation

Intermediate
Sunzhu Li, Jiale Zhao et al. · Jan 13 · arXiv

RubricHub is a large collection of about 110,000 detailed grading guides (rubrics) covering many kinds of questions across domains such as health, science, writing, and chat.

#RubricHub · #coarse-to-fine rubric generation · #multi-model aggregation
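
As a rough illustration of what one entry in a rubric dataset like this might look like, here is a toy example with hypothetical field names (not the actual RubricHub schema), plus a tiny weighted-scoring helper:

```python
# Toy sketch of a single rubric entry and a weighted scorer.
# Field names and content are hypothetical, not the real RubricHub schema.
rubric_entry = {
    "domain": "health",
    "question": "Explain the risks of combining ibuprofen and alcohol.",
    "criteria": [
        {"weight": 3, "description": "Mentions increased risk of stomach bleeding."},
        {"weight": 2, "description": "Advises consulting a doctor or pharmacist."},
        {"weight": 1, "description": "Uses clear, non-technical language."},
    ],
}

def rubric_score(criteria: list[dict], answer_meets: list[bool]) -> float:
    """Fraction of the total criterion weight that the answer satisfies."""
    total = sum(c["weight"] for c in criteria)
    earned = sum(c["weight"] for c, met in zip(criteria, answer_meets) if met)
    return earned / total

print(rubric_score(rubric_entry["criteria"], [True, False, True]))  # ~0.67
```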

UM-Text: A Unified Multimodal Model for Image Understanding and Visual Text Editing

Intermediate
Lichen Ma, Xiaolong Fu et al. · Jan 13 · arXiv

UM-Text is a single AI model that understands both your words and your picture, so it can add or change text in an image in a way that looks like it truly belongs there.

#visual text editing · #multimodal diffusion · #Visual Language Model

SnapGen++: Unleashing Diffusion Transformers for Efficient High-Fidelity Image Generation on Edge Devices

Intermediate
Dongting Hu, Aarush Gupta et al. · Jan 13 · arXiv

This paper shows how to make powerful image‑generating Transformers run fast on phones without needing the cloud.

#Diffusion Transformer · #Sparse Attention · #Adaptive Sparse Self-Attention

Enhancing Sentiment Classification and Irony Detection in Large Language Models through Advanced Prompt Engineering Techniques

Beginner
Marvin Schmitt, Anne Schwerk et al. · Jan 13 · arXiv

Giving large language models a few good examples and step-by-step instructions can make them much better at spotting feelings in text.

#prompt engineering · #few-shot learning · #chain-of-thought
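
As a minimal sketch of the kind of prompting this summary describes, here is a few-shot prompt with step-by-step reasoning; the examples and wording are invented for illustration and are not taken from the paper:

```python
# Illustrative few-shot, chain-of-thought prompt for sentiment/irony classification.
# The examples and wording are made up for demonstration, not taken from the paper.
FEW_SHOT_PROMPT = """\
Classify the sentiment of each text as positive, negative, or ironic.
Think step by step before giving the label.

Text: "Great, my flight got delayed again. Best day ever."
Reasoning: The literal words are positive, but a delayed flight is unpleasant,
so the praise is insincere.
Label: ironic

Text: "The new update fixed every bug I reported. Really impressed."
Reasoning: The speaker describes a concrete improvement and expresses approval.
Label: positive

Text: "{input_text}"
Reasoning:"""

def build_prompt(input_text: str) -> str:
    """Fill the few-shot template with the text to classify."""
    return FEW_SHOT_PROMPT.format(input_text=input_text)

print(build_prompt("Oh wonderful, another Monday morning meeting."))
```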

User-Oriented Multi-Turn Dialogue Generation with Tool Use at scale

Intermediate
Jungho Cho, Minbyul Jeong et al. · Jan 13 · arXiv

The paper introduces a new way to generate realistic, long conversations between people and AI assistants that use tools such as databases.

#multi-turn dialogue generation · #tool use · #user simulation

The Agent's First Day: Benchmarking Learning, Exploration, and Scheduling in the Workplace Scenarios

Beginner
Daocheng Fu, Jianbiao Mei et al. · Jan 13 · arXiv

The paper introduces Trainee-Bench, a new way to test AI agents that feels like a real first day at work, with tasks arriving over time, hidden clues, and changing priorities.

#Trainee-Bench · #dynamic task scheduling · #active exploration

MemoBrain: Executive Memory as an Agentic Brain for Reasoning

Intermediate
Hongjin Qian, Zhao Cao et al. · Jan 12 · arXiv

MemoBrain is like a helpful co-pilot for AI that keeps important thoughts neat and ready so the main thinker (the agent) doesn’t get overwhelmed.

#Executive memory · #Tool-augmented agents · #Context budget

MHLA: Restoring Expressivity of Linear Attention via Token-Level Multi-Head

Intermediate
Kewei Zhang, Ye Huang et al. · Jan 12 · arXiv

Transformers are powerful but slow because regular self-attention compares every token with every other token, a cost that grows quadratically with sequence length; MHLA restores the expressivity of linear attention through token-level multi-head computation while keeping the cost linear.

#Multi-Head Linear Attention · #Linear Attention · #Self-Attention
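
To make the cost argument concrete, here is a generic comparison of quadratic softmax attention with a simple linear-attention variant; this illustrates the reordering trick only and is not MHLA's token-level multi-head method:

```python
# Generic comparison of quadratic softmax attention vs. linear attention.
# This illustrates the cost argument only; it is not MHLA's actual method.
import numpy as np

def softmax_attention(Q, K, V):
    # Builds an (n x n) score matrix: cost grows quadratically with length n.
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

def linear_attention(Q, K, V, phi=lambda x: np.maximum(x, 0) + 1e-6):
    # Computes phi(K)^T V first (a d x d matrix), so cost grows linearly with n.
    Qp, Kp = phi(Q), phi(K)
    kv = Kp.T @ V                      # (d, d)
    normalizer = Qp @ Kp.sum(axis=0)   # (n,)
    return (Qp @ kv) / normalizer[:, None]

n, d = 8, 4
rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(3, n, d))
print(softmax_attention(Q, K, V).shape, linear_attention(Q, K, V).shape)  # (8, 4) (8, 4)
```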

More Images, More Problems? A Controlled Analysis of VLM Failure Modes

Intermediate
Anurag Das, Adrian Bulat et al. · Jan 12 · arXiv

Large Vision-Language Models (LVLMs) look great on single images but often stumble when they must reason across multiple images.

#Large Vision-Language Models · #Multi-image reasoning · #Cross-image aggregation

OS-Symphony: A Holistic Framework for Robust and Generalist Computer-Using Agent

Intermediate
Bowen Yang, Kaiming Jin et al. · Jan 12 · arXiv

Computer-using agents tend to forget important visual details over long tasks and cannot reliably find up-to-date, step-by-step guidance for unfamiliar apps.

#computer-using agents · #vision-language models · #milestone memory

Beyond Static Tools: Test-Time Tool Evolution for Scientific Reasoning

Intermediate
Jiaxuan Lu, Ziyu Kong et al. · Jan 12 · arXiv

This paper teaches AI to build and improve its own small computer helpers (tools) while solving science problems, instead of relying only on a fixed toolbox made beforehand.

#Test-Time Tool Evolution · #Dynamic tool synthesis · #Scientific reasoning
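
As a toy illustration of the evolve-and-test idea, the sketch below drafts a tiny tool, checks it on example cases, and repairs it from the feedback; every name and the stand-in "LLM" are invented here, and the paper's actual pipeline will differ:

```python
# Toy illustration of test-time tool evolution: draft a small tool, check it
# against examples, and refine it when it fails. All names are hypothetical.

def draft_tool(feedback: str = "") -> str:
    """Stand-in for an LLM that writes tool code and repairs it from feedback."""
    if "float division" in feedback:
        return "def average(xs): return sum(xs) / len(xs)"
    return "def average(xs): return sum(xs) // len(xs)"   # first draft has a bug

def check_tool(code: str, cases) -> tuple[bool, str]:
    """Run the drafted tool on example cases and report what went wrong."""
    namespace: dict = {}
    exec(code, namespace)
    for args, expected in cases:
        if abs(namespace["average"](args) - expected) > 1e-9:
            return False, "wrong result: use float division"
    return True, ""

cases = [([1, 2], 1.5), ([3, 4, 5], 4.0)]
feedback, code = "", ""
for _ in range(3):                     # a few rounds of evolve-and-test
    code = draft_tool(feedback)
    ok, feedback = check_tool(code, cases)
    if ok:
        break
print(code)  # the refined tool that passed the checks
```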

TAG-MoE: Task-Aware Gating for Unified Generative Mixture-of-Experts

Intermediate
Yu Xu, Hongbin Yan et al. · Jan 12 · arXiv

TAG-MoE is a new way to steer Mixture-of-Experts (MoE) models using clear task hints, so the right “mini-experts” handle the right parts of an image job.

#Task-Aware Gating · #Mixture-of-Experts · #Unified Image Generation
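
A generic sketch of task-aware gating, where the router sees a task embedding alongside the token so routing can depend on the task; this illustrates the general idea only and is not the TAG-MoE architecture:

```python
# Generic sketch of task-aware gating in a Mixture-of-Experts layer: the gate
# sees the token plus a task embedding, so routing can depend on the task.
# This is an illustration, not the TAG-MoE architecture.
import numpy as np

rng = np.random.default_rng(0)
d, num_experts, num_tasks = 16, 4, 3

W_gate = rng.normal(size=(2 * d, num_experts))    # gate weights over [token; task]
task_embed = rng.normal(size=(num_tasks, d))      # one embedding per task
experts = [rng.normal(size=(d, d)) for _ in range(num_experts)]

def moe_layer(x: np.ndarray, task_id: int, top_k: int = 2) -> np.ndarray:
    gate_in = np.concatenate([x, task_embed[task_id]])   # token + task hint
    logits = gate_in @ W_gate
    top = np.argsort(logits)[-top_k:]                    # pick top-k experts
    weights = np.exp(logits[top]) / np.exp(logits[top]).sum()
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

token = rng.normal(size=d)
print(moe_layer(token, task_id=1).shape)  # (16,)
```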