How I Study AI - Learn AI Papers & Lectures the Easy Way

Papers (196)


Learning from Next-Frame Prediction: Autoregressive Video Modeling Encodes Effective Representations

Beginner
Jinghan Li, Yang Jin et al. · Dec 24 · arXiv

This paper introduces NExT-Vid, a way to teach a video model by asking it to guess the next frame of a video while parts of the past are hidden.

#autoregressive video pretraining · #masked next-frame prediction · #context isolation

TokSuite: Measuring the Impact of Tokenizer Choice on Language Model Behavior

Beginner
Gül Sena Altıntaş, Malikeh Ehghaghi et al. · Dec 23 · arXiv

TokSuite is a science lab for tokenizers: it trains 14 language models that are identical in every way except for how they split text into tokens.

#tokenization · #tokenizer robustness · #Byte Pair Encoding (BPE)

WorldWarp: Propagating 3D Geometry with Asynchronous Video Diffusion

Beginner
Hanyang Kong, Xingyi Yang et al. · Dec 22 · arXiv

WorldWarp is a new method that turns a single photo plus a planned camera path into a long, steady, 3D-consistent video.

#Novel View Synthesis · #3D Gaussian Splatting · #Spatio-Temporal Diffusion

Bottom-up Policy Optimization: Your Language Model Policy Secretly Contains Internal Policies

Beginner
Yuqiao Tan, Minzheng Wang et al. · Dec 22 · arXiv

Large language models (LLMs) don’t act as a single brain; inside, each layer and module quietly makes its own mini-decisions called internal policies.

#Bottom-up Policy Optimization · #internal layer policy · #internal modular policy

Real2Edit2Real: Generating Robotic Demonstrations via a 3D Control Interface

Beginner
Yujie Zhao, Hongwei Fan et al. · Dec 22 · arXiv

Robots learn better when they see many examples, but collecting lots of real videos is slow and expensive; Real2Edit2Real generates new robotic demonstrations by editing real ones through a 3D control interface.

#robotic demonstration generation · #depth-controlled video generation · #metric-scale 3D reconstruction

MemEvolve: Meta-Evolution of Agent Memory Systems

Beginner
Guibin Zhang, Haotian Ren et al. · Dec 21 · arXiv

MemEvolve teaches AI agents not only to remember past experiences but also to improve the way they remember, like a student who upgrades their study habits over time.

#LLM agents · #agent memory · #meta-evolution

Both Semantics and Reconstruction Matter: Making Representation Encoders Ready for Text-to-Image Generation and Editing

Beginner
Shilong Zhang, He Zhang et al. · Dec 19 · arXiv

This paper shows that strong image-understanding features alone are not enough for generating great images; you also need strong pixel-level reconstruction detail.

#Pixel–Semantic VAE · #Semantic Regularization · #Off-Manifold Generation

Robust-R1: Degradation-Aware Reasoning for Robust Visual Understanding

Beginner
Jiaqi Tang, Jianmin Chen et al. · Dec 19 · arXiv

Robust-R1 teaches vision-language models to notice how a picture is damaged, think through what that damage hides, and then answer as if the picture were clear.

#Robust-R1 · #degradation-aware reasoning · #multimodal large language models

Next-Embedding Prediction Makes Strong Vision Learners

Beginner
Sihan Xu, Ziqiao Ma et al. · Dec 18 · arXiv

This paper introduces NEPA, a very simple way to teach vision models by having them predict the next patch’s embedding in an image sequence, just like language models predict the next word.

#self-supervised learning · #vision transformer · #autoregression
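The next-embedding idea behind NEPA can be illustrated with a toy sketch: treat an image as an ordered sequence of patch embeddings and train a predictor to regress embedding t+1 from embedding t, the way a language model predicts the next word. Everything below — the random "embeddings", the linear predictor, the shapes — is a hypothetical stand-in for illustration, not the paper's ViT-based setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in: 16 patch embeddings of dimension 8, in raster order.
num_patches, dim = 16, 8
patches = rng.normal(size=(num_patches, dim))

# Linear next-embedding predictor, trained by plain SGD on MSE:
# predict embedding t+1 from embedding t.
W = np.zeros((dim, dim))
lr = 0.05
for _ in range(200):
    pred = patches[:-1] @ W            # predictions for embeddings 1..N-1
    err = pred - patches[1:]           # regression error vs. true next embedding
    W -= lr * patches[:-1].T @ err / (num_patches - 1)

mse = float(np.mean((patches[:-1] @ W - patches[1:]) ** 2))
```

In the paper the predictor is a full transformer and the embeddings come from an encoder; the point of the sketch is only the training signal — regress the next patch's embedding rather than classify or reconstruct pixels.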

Trainable Log-linear Sparse Attention for Efficient Diffusion Transformers

Beginner
Yifan Zhou, Zeqi Xiao et al. · Dec 18 · arXiv

This paper introduces Log-linear Sparse Attention (LLSA), a new way for Diffusion Transformers to focus only on the most useful information using a smart, layered search.

#Log-linear Sparse Attention · #Hierarchical Top-K · #Hierarchical KV Enrichment
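The sparsification step at the heart of LLSA can be sketched with a flat top-k attention: each query attends only to its k highest-scoring keys instead of all of them. The paper's actual method selects keys hierarchically for log-linear cost; the plain top-k below (with hypothetical names and shapes) stands in purely to illustrate the idea.

```python
import numpy as np

def topk_sparse_attention(q, K, V, k):
    """Attend only to the k keys with the highest scores for this query.

    A minimal stand-in for sparse attention: score all keys, keep the
    top k, and take the softmax-weighted average of their values.
    """
    scores = K @ q / np.sqrt(q.shape[0])           # scaled dot-product scores
    idx = np.argpartition(scores, -k)[-k:]         # indices of the top-k keys
    w = np.exp(scores[idx] - scores[idx].max())
    w /= w.sum()                                   # softmax over kept keys only
    return w @ V[idx]

rng = np.random.default_rng(1)
seq, dim, k = 64, 16, 8
q = rng.normal(size=dim)
K = rng.normal(size=(seq, dim))
V = rng.normal(size=(seq, dim))
out = topk_sparse_attention(q, K, V, k)
```

With k equal to the sequence length this reduces exactly to dense attention; the trick, as in LLSA, is that a good selection of k keys preserves most of the output at a fraction of the cost.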

Hearing to Translate: The Effectiveness of Speech Modality Integration into LLMs

Beginner
Sara Papi, Javier Garcia Gilabert et al. · Dec 18 · arXiv

This paper builds a big, fair test called Hearing to Translate to check how well different speech translation systems work in the real world.

#speech translation · #Speech-LLM · #cascaded ASR-MT

LoPA: Scaling dLLM Inference via Lookahead Parallel Decoding

Beginner
Chenkai Xu, Yijie Jin et al. · Dec 18 · arXiv

This paper speeds up diffusion language models (dLLMs) by changing the order in which they fill in missing words.

#Diffusion LLM · #Parallel decoding · #Token Filling Order