How I Study AI - Learn AI Papers & Lectures the Easy Way

Papers (943)


InfiniDepth: Arbitrary-Resolution and Fine-Grained Depth Estimation with Neural Implicit Fields

Intermediate
Hao Yu, Haotong Lin et al. · Jan 6 · arXiv

InfiniDepth is a new way to predict depth that treats every image location as a point on a smooth, continuous field you can query for depth, rather than one cell in a fixed grid of pixels.

#monocular depth estimation · #neural implicit fields · #arbitrary resolution depth
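The "continuous field" idea can be sketched in a few lines. This is a hypothetical toy, not InfiniDepth's actual architecture: a tiny random-weight MLP with a Fourier positional encoding represents depth as a function f(x, y) that can be queried at any real-valued coordinate, including sub-pixel locations.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(xy, n_freq=4):
    """Fourier features of normalized coordinates in [0, 1]^2."""
    freqs = (2.0 ** np.arange(n_freq)) * np.pi
    angles = xy[:, None, :] * freqs[None, :, None]            # (N, n_freq, 2)
    feats = np.concatenate([np.sin(angles), np.cos(angles)], axis=1)
    return feats.reshape(len(xy), -1)                         # (N, 4 * n_freq)

# Random, untrained weights: the point is the query interface, not accuracy.
W1 = rng.normal(size=(16, 32)); b1 = np.zeros(32)
W2 = rng.normal(size=(32, 1));  b2 = np.zeros(1)

def depth(xy):
    """Query depth at continuous coordinates xy, shape (N, 2)."""
    h = np.tanh(encode(xy) @ W1 + b1)
    return (h @ W2 + b2).squeeze(-1)

# Sub-pixel queries: coordinates need not lie on any pixel grid.
queries = np.array([[0.5, 0.5], [0.501, 0.5], [0.25, 0.75]])
print(depth(queries))  # one depth value per query point
```

Because the representation is a function rather than an array, "resolution" is just how densely you choose to sample it.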

LTX-2: Efficient Joint Audio-Visual Foundation Model

Intermediate
Yoav HaCohen, Benny Brazowski et al. · Jan 6 · arXiv

LTX-2 is an open-source model that makes video and sound together from a text prompt, so the picture and audio match in time and meaning.

#text-to-video · #text-to-audio · #audiovisual generation

UniCorn: Towards Self-Improving Unified Multimodal Models through Self-Generated Supervision

Beginner
Ruiyan Han, Zhen Fang et al. · Jan 6 · arXiv

This paper fixes a common problem in multimodal AI: models can understand pictures and words well but stumble when asked to create matching images.

#Unified Multimodal Models · #Self-Generated Supervision · #Conduction Aphasia

Unified Thinker: A General Reasoning Modular Core for Image Generation

Intermediate
Sashuai Zhou, Qiang Zhou et al. · Jan 6 · arXiv

Unified Thinker separates “thinking” (planning) from “drawing” (image generation) so complex instructions get turned into clear, doable steps before any pixels are painted.

#reasoning-aware image generation · #structured planning · #edit-only prompt

One Sample to Rule Them All: Extreme Data Efficiency in RL Scaling

Beginner
Yiyuan Li, Zhen Huang et al. · Jan 6 · arXiv

This paper shows that training a language model with reinforcement learning on just one carefully designed example can boost reasoning across many school subjects, not just math.

#polymath learning · #one-shot reinforcement learning · #GRPO

SOP: A Scalable Online Post-Training System for Vision-Language-Action Models

Intermediate
Mingjie Pan, Siyuan Feng et al. · Jan 6 · arXiv

This paper introduces SOP, a system that lets many real robots learn new skills online at the same time while keeping one shared brain (policy).

#Scalable Online Post-Training · #Vision-Language-Action Models · #Actor–Learner Architecture

MMFormalizer: Multimodal Autoformalization in the Wild

Intermediate
Jing Xiong, Qi Han et al. · Jan 6 · arXiv

MMFormalizer is a new system that turns problems with pictures and words (like physics scenes or geometry diagrams) into strict, checkable math statements and proofs.

#multimodal autoformalization · #Lean theorem prover · #dimensional grounding

Why LLMs Aren't Scientists Yet: Lessons from Four Autonomous Research Attempts

Intermediate
Dhruv Trehan, Paras Chopra · Jan 6 · arXiv

The authors built a simple six-agent system to see if today’s AI models could plan, run, and write a research paper mostly on their own.

#autonomous research pipeline · #implementation drift · #training data bias

Large Reasoning Models Are (Not Yet) Multilingual Latent Reasoners

Beginner
Yihong Liu, Raoyuan Zhao et al. · Jan 6 · arXiv

Large reasoning models can often find the right math answer in their “head” before finishing their written steps, but this works best in languages with lots of training data like English and Chinese.

#latent reasoning · #chain-of-thought · #multilingual LLMs

Mechanistic Interpretability of Large-Scale Counting in LLMs through a System-2 Strategy

Intermediate
Hosein Hasani, Mohammadali Banayeeanzade et al. · Jan 6 · arXiv

Large language models (LLMs) are good at many math problems but often mess up simple counting when the list gets long.

#mechanistic interpretability · #counting in LLMs · #System-2 prompting

DreamStyle: A Unified Framework for Video Stylization

Intermediate
Mengtian Li, Jinshu Chen et al. · Jan 6 · arXiv

DreamStyle is a single video-stylizing model that can follow text, copy a style image, or continue from a stylized first frame—without switching tools.

#video stylization · #image-to-video (I2V) · #token-specific LoRA

MiMo-V2-Flash Technical Report

Intermediate
Xiaomi LLM-Core Team et al. · Jan 6 · arXiv

MiMo-V2-Flash is a giant but efficient language model that uses a team-of-experts design to think well while staying fast.

#Mixture-of-Experts · #Sliding Window Attention · #Global Attention
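The "team-of-experts" design above is generically known as top-k mixture-of-experts routing. The following is a minimal illustrative sketch of that general idea, not MiMo-V2-Flash's actual implementation: a router scores each token against every expert, only the top-k experts run for that token, and their outputs are combined with softmax-normalized router weights. This is what keeps a large model fast: most experts stay idle for any given token.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, n_experts, top_k = 8, 4, 2

router_W = rng.normal(size=(d_model, n_experts))
# Each "expert" stands in for a small feed-forward layer with its own weights.
experts = [rng.normal(size=(d_model, d_model)) for _ in range(n_experts)]

def moe_layer(x):
    """x: (n_tokens, d_model) -> (n_tokens, d_model); only top-k experts run per token."""
    logits = x @ router_W                             # (n_tokens, n_experts)
    out = np.zeros_like(x)
    for t in range(len(x)):
        top = np.argsort(logits[t])[-top_k:]          # indices of the k best experts
        w = np.exp(logits[t, top]); w /= w.sum()      # softmax over the selected k only
        for weight, e in zip(w, top):
            out[t] += weight * np.tanh(x[t] @ experts[e])
    return out

tokens = rng.normal(size=(3, d_model))
y = moe_layer(tokens)
print(y.shape)  # (3, 8)
```

With n_experts = 4 and top_k = 2, each token pays for only half the experts' compute while the model as a whole keeps all four experts' capacity.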