How I Study AI - Learn AI Papers & Lectures the Easy Way

Papers (196)


Atlas: Orchestrating Heterogeneous Models and Tools for Multi-Domain Complex Reasoning

Beginner
Jinyang Wu, Guocheng Zhai et al. · Jan 7 · arXiv

ATLAS is a system that picks the best mix of AI models and helper tools for each question, instead of using just one model or a fixed tool plan.

#ATLAS, #LLM routing, #tool augmentation

What Users Leave Unsaid: Under-Specified Queries Limit Vision-Language Models

Beginner
Dasol Choi, Guijin Son et al. · Jan 7 · arXiv

Real people often ask vague questions with pictures, and today’s vision-language models (VLMs) struggle with them.

#vision-language models, #under-specified queries, #query explicitation

ThinkRL-Edit: Thinking in Reinforcement Learning for Reasoning-Centric Image Editing

Beginner
Hengjia Li, Liming Jiang et al. · Jan 6 · arXiv

ThinkRL-Edit teaches an image editor to think first and draw second, which makes tricky, reasoning-heavy edits much more accurate.

#reasoning-centric image editing, #reinforcement learning, #chain-of-thought

Enhancing Linguistic Competence of Language Models through Pre-training with Language Learning Tasks

Beginner
Atsuki Yamaguchi, Maggie Mi et al. · Jan 6 · arXiv

The paper trains language models with extra "language homework" tasks built from the same raw text, so they learn grammar and meaning rather than just next-word guessing.

#language model pretraining, #causal language modeling, #linguistic competence

UniCorn: Towards Self-Improving Unified Multimodal Models through Self-Generated Supervision

Beginner
Ruiyan Han, Zhen Fang et al. · Jan 6 · arXiv

This paper fixes a common problem in multimodal AI: models can understand pictures and words well but stumble when asked to create matching images.

#Unified Multimodal Models, #Self-Generated Supervision, #Conduction Aphasia

One Sample to Rule Them All: Extreme Data Efficiency in RL Scaling

Beginner
Yiyuan Li, Zhen Huang et al. · Jan 6 · arXiv

This paper shows that training a language model with reinforcement learning on just one carefully designed example can boost reasoning across many school subjects, not just math.

#polymath learning, #one-shot reinforcement learning, #GRPO

Large Reasoning Models Are (Not Yet) Multilingual Latent Reasoners

Beginner
Yihong Liu, Raoyuan Zhao et al. · Jan 6 · arXiv

Large reasoning models can often find the right math answer in their “head” before finishing their written steps, but this works best in languages with lots of training data like English and Chinese.

#latent reasoning, #chain-of-thought, #multilingual LLMs

VINO: A Unified Visual Generator with Interleaved OmniModal Context

Beginner
Junyi Chen, Tong He et al. · Jan 5 · arXiv

VINO is a single AI model that can make and edit both images and videos by listening to text and looking at reference pictures and clips at the same time.

#VINO, #unified visual generator, #multimodal diffusion transformer

Falcon-H1R: Pushing the Reasoning Frontiers with a Hybrid Model for Efficient Test-Time Scaling

Beginner
Falcon LLM Team, Iheb Chaabane et al. · Jan 5 · arXiv

Falcon-H1R is a small (7B-parameter) AI model that reasons well without needing giant computers.

#Falcon-H1R, #Hybrid Transformer-Mamba, #Chain-of-Thought

OpenRT: An Open-Source Red Teaming Framework for Multimodal LLMs

Beginner
Xin Wang, Yunhao Chen et al. · Jan 4 · arXiv

OpenRT is a big, open-source test bench that safely stress-tests AI models that handle both text and images.

#OpenRT, #red teaming, #multimodal LLM

MOSS Transcribe Diarize Technical Report

Beginner
MOSI. AI et al. · Jan 4 · arXiv

This paper introduces MOSS Transcribe Diarize, a single model that writes down what people say in a conversation, tells who said each part, and marks the exact times, all in one go.

#speaker diarization, #speech recognition, #end-to-end SATS

SpaceTimePilot: Generative Rendering of Dynamic Scenes Across Space and Time

Beginner
Zhening Huang, Hyeonho Jeong et al. · Dec 31 · arXiv

SpaceTimePilot is a video AI that lets you steer both where the camera goes (space) and how the action plays (time) from one input video.

#video diffusion, #space–time disentanglement, #camera control