How I Study AI - Learn AI Papers & Lectures the Easy Way

Papers (19)


Timer-S1: A Billion-Scale Time Series Foundation Model with Serial Scaling

Intermediate
Yong Liu, Xingjian Su et al. · Mar 5 · arXiv

Timer-S1 is a huge Mixture-of-Experts time-series model (8.3B total parameters, only 0.75B active per step) that predicts the future by thinking step-by-step inside one forward pass (a toy routing sketch follows below).

#time series forecasting · #foundation models · #Mixture-of-Experts
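
The "8.3B total, 0.75B active" gap comes from sparse expert routing: each token only visits a few experts. The sketch below is a generic top-k Mixture-of-Experts layer in PyTorch, not Timer-S1's actual architecture; all sizes are made up for illustration.

```python
# Generic sparse MoE layer: many experts exist, only k run per input.
# Hypothetical sizes; NOT Timer-S1's real configuration.
import torch
import torch.nn as nn

class SparseMoE(nn.Module):
    def __init__(self, d_model=64, n_experts=8, k=2):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts)   # scores every expert
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, 4 * d_model),
                          nn.GELU(),
                          nn.Linear(4 * d_model, d_model))
            for _ in range(n_experts)
        )
        self.k = k  # only k of n_experts are active per token

    def forward(self, x):                              # x: (batch, d_model)
        weights = self.router(x).softmax(dim=-1)
        topw, topi = weights.topk(self.k, dim=-1)      # pick the k best experts
        topw = topw / topw.sum(dim=-1, keepdim=True)   # renormalize their weights
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e in range(len(self.experts)):
                mask = topi[:, slot] == e              # tokens routed to expert e
                if mask.any():
                    out[mask] += topw[mask, slot:slot + 1] * self.experts[e](x[mask])
        return out

moe = SparseMoE()
print(moe(torch.randn(4, 64)).shape)  # torch.Size([4, 64])
```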

Beyond Language Modeling: An Exploration of Multimodal Pretraining

Intermediate
Shengbang Tong, David Fan et al. · Mar 3 · arXiv

The paper trains one model from scratch to both read text and see images/videos, instead of starting from a language-only model.

#multimodal pretraining · #representation autoencoder · #RAE

A Very Big Video Reasoning Suite

Intermediate
Maijunxian Wang, Ruisi Wang et al. · Feb 23 · arXiv

This paper builds a gigantic library of video puzzles (VBVR) so AI can practice not just making pretty videos, but actually thinking through what happens over time.

#video reasoning · #rule-based evaluation · #in-domain generalization

ArXiv-to-Model: A Practical Study of Scientific LM Training

Intermediate
Anuj Gupta · Feb 19 · arXiv

This paper shows, step by step, how to train a 1.36-billion-parameter science-focused language model directly from raw arXiv LaTeX files using only 2 A100 GPUs (a toy LaTeX-cleaning sketch follows below).

#scientific language model · #arXiv LaTeX · #tokenization
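
As a rough picture of what "training directly from raw arXiv LaTeX" involves, here is a hypothetical cleaning step that strips comments and command names and replaces math with placeholder tokens. The regexes and placeholders are my assumptions for illustration, not the paper's actual pipeline.

```python
# Hypothetical arXiv-LaTeX-to-plain-text cleaner (illustrative only).
import re

def latex_to_text(src: str) -> str:
    src = re.sub(r"(?<!\\)%.*", "", src)                    # strip TeX comments
    src = re.sub(r"\\begin\{equation\}.*?\\end\{equation\}",
                 " <EQN> ", src, flags=re.S)                # display math -> placeholder
    src = re.sub(r"\$[^$]*\$", " <MATH> ", src)             # inline math -> placeholder
    src = re.sub(r"\\[a-zA-Z]+\*?", " ", src)               # drop command names, keep arguments
    src = re.sub(r"[{}\[\]]", " ", src)                     # drop leftover braces/brackets
    return re.sub(r"\s+", " ", src).strip()                 # collapse whitespace

sample = r"""% a comment
We train a model. $x^2$ grows \textbf{fast}.
\begin{equation} y = mx + b \end{equation}"""
print(latex_to_text(sample))  # "We train a model. <MATH> grows fast . <EQN>"
```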

MOSS-Audio-Tokenizer: Scaling Audio Tokenizers for Future Audio Foundation Models

Intermediate
Yitian Gong, Kuangwei Chen et al. · Feb 11 · arXiv

This paper builds a new audio tokenizer, called MOSS-Audio-Tokenizer, that turns sound into tiny tokens the way text tokenizers turn sentences into words (a minimal residual-quantization sketch follows below).

#audio tokenizer · #causal transformer · #residual vector quantization
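
The "residual vector quantization" tag refers to a standard trick: each codebook stage quantizes the error left over by the previous stage, so a few small codebooks compound into a precise code. A minimal NumPy sketch with made-up codebook sizes, not MOSS-Audio-Tokenizer's actual configuration:

```python
# Minimal residual vector quantization (RVQ): stage i quantizes the
# residual error of stages 0..i-1. Sizes are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
codebooks = [rng.normal(size=(256, 16)) for _ in range(4)]  # 4 stages, 256 codes, dim 16

def rvq_encode(x, codebooks):
    residual, codes = x.copy(), []
    for cb in codebooks:
        idx = int(np.argmin(np.linalg.norm(cb - residual, axis=1)))  # nearest code
        codes.append(idx)
        residual -= cb[idx]           # next stage only sees what is left over
    return codes

def rvq_decode(codes, codebooks):
    return sum(cb[i] for cb, i in zip(codebooks, codes))

x = rng.normal(size=16)
codes = rvq_encode(x, codebooks)
print(codes, np.linalg.norm(x - rvq_decode(codes, codebooks)))  # error shrinks per stage
```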

daVinci-Agency: Unlocking Long-Horizon Agency Data-Efficiently

Intermediate
Mohan Jiang, Dayuan Fu et al. · Feb 2 · arXiv

Long tasks trip up most AIs because they lose track of goals and make small mistakes that snowball over many steps.

#long-horizon agency · #pull request chains · #software evolution

MetricAnything: Scaling Metric Depth Pretraining with Noisy Heterogeneous Sources

Intermediate
Baorui Ma, Jiahui Yang et al. · Jan 29 · arXiv

MetricAnything is a new way to teach AI real, ruler-like distances (metric depth) from very mixed and noisy 3D data (a toy scale-alignment sketch follows below).

#metric depth estimation · #sparse metric prompt · #monocular depth
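
One way to picture a "sparse metric prompt" is anchoring a relative (up-to-scale) depth map to a handful of known metric distances. The least-squares scale-and-shift fit below is a simplifying assumption for illustration, not the paper's method:

```python
# Fit a global scale s and shift t so that s*relative_depth + t matches a
# few known metric measurements. Toy example; the linear model is assumed.
import numpy as np

def fit_scale_shift(rel_depth, pts, metric_vals):
    d = np.array([rel_depth[y, x] for y, x in pts])
    A = np.stack([d, np.ones_like(d)], axis=1)         # columns: depth, constant
    (s, t), *_ = np.linalg.lstsq(A, np.array(metric_vals), rcond=None)
    return s, t

rel = np.random.default_rng(1).uniform(0.1, 1.0, size=(4, 4))   # fake relative depth
pts = [(0, 0), (1, 2), (3, 3)]                          # pixels with known distances
truth = [2.0 * rel[y, x] + 0.5 for y, x in pts]         # pretend lidar says s=2, t=0.5
print(fit_scale_shift(rel, pts, truth))                 # recovers ~ (2.0, 0.5)
```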

Shaping capabilities with token-level data filtering

Intermediate
Neil Rathi, Alec Radford · Jan 29 · arXiv

The paper shows a simple way to teach AI models what not to learn by removing only the exact words (tokens) related to unwanted topics during pretraining (a toy masked-loss sketch follows below).

#token-level data filtering · #capability shaping · #sparse autoencoders
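
The core mechanic can be sketched as a masked language-modeling loss: tokens flagged as unwanted simply contribute nothing to the gradient, so the model never learns to predict them. The flagging itself (a hand-made mask here) is where the paper's real work lives, e.g. via sparse autoencoders:

```python
# Language-modeling loss that skips flagged tokens. The keep-mask below is
# hand-written for the demo; a real filter would produce it automatically.
import torch
import torch.nn.functional as F

def filtered_lm_loss(logits, targets, keep_mask):
    # logits: (batch, seq, vocab); targets, keep_mask: (batch, seq)
    loss = F.cross_entropy(logits.transpose(1, 2), targets, reduction="none")
    loss = loss * keep_mask                          # flagged tokens add zero loss
    return loss.sum() / keep_mask.sum().clamp(min=1)

logits = torch.randn(2, 5, 100)
targets = torch.randint(0, 100, (2, 5))
keep = torch.tensor([[1, 1, 0, 1, 1],
                     [1, 0, 0, 1, 1]]).float()       # 0 = filtered-out token
print(filtered_lm_loss(logits, targets, keep))
```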

SERA: Soft-Verified Efficient Repository Agents

Intermediate
Ethan Shen, Danny Tormoen et al. · Jan 28 · arXiv

SERA is a new, low-cost way to train coding helpers (agents) that learn the style and secrets of your own codebase (a toy soft-verification sketch follows below).

#SERA · #Soft-Verified Generation · #soft verification
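
"Soft verification" can be pictured as grading candidate agent trajectories on a 0-to-1 scale instead of a binary pass/fail, then keeping only high scorers as training data. The scoring heuristics below are invented for illustration; SERA's actual verifier differs:

```python
# Toy soft verifier: blend several weak quality signals into one score,
# then filter generated training data by a threshold. All heuristics and
# field names here are hypothetical.
def soft_verify(trajectory: dict) -> float:
    score = 0.0
    if trajectory.get("patch_applies"):                  # patch is at least valid
        score += 0.4
    score += 0.4 * trajectory.get("tests_passed_frac", 0.0)
    if trajectory.get("touches_expected_files"):         # edits land in plausible places
        score += 0.2
    return score

candidates = [
    {"patch_applies": True, "tests_passed_frac": 0.9, "touches_expected_files": True},
    {"patch_applies": False, "tests_passed_frac": 0.1, "touches_expected_files": False},
]
kept = [c for c in candidates if soft_verify(c) >= 0.7]  # keep only high scorers
print(len(kept))  # 1
```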

TSRBench: A Comprehensive Multi-task Multi-modal Time Series Reasoning Benchmark for Generalist Models

Beginner
Fangxu Yu, Xingang Guo et al. · Jan 26 · arXiv

TSRBench is a giant test that checks if AI models can understand and reason about data that changes over time, like heartbeats, stock prices, and weather.

#time series reasoning · #multimodal benchmark · #perception

EEG Foundation Models: Progresses, Benchmarking, and Open Problems

Intermediate
Dingkun Liu, Yuheng Chen et al. · Jan 25 · arXiv

This paper builds a big, fair playground (a benchmark) to test many EEG foundation models side by side under the same rules.

#EEG foundation models · #brain-computer interface · #self-supervised learning

Learnable Multipliers: Freeing the Scale of Language Model Matrix Layers

Intermediate
Maksim Velikanov, Ilyas Chahed et al. · Jan 8 · arXiv

The paper shows that big language models often get stuck with weight scales set by training hyperparameters instead of by the data, which quietly hurts performance (a minimal learnable-multiplier sketch follows below).

#learnable multipliers · #weight decay · #noise–WD equilibrium
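
The fix suggested by the title can be sketched as attaching an explicit learnable scalar gain to each weight matrix, so the layer's overall scale is learned from data rather than pinned by weight decay and the learning rate. A generic PyTorch sketch, not the paper's exact parameterization:

```python
# Linear layer with a learnable scalar multiplier on its weight matrix.
# Illustrative only; the paper's parameterization may differ.
import torch
import torch.nn as nn

class ScaledLinear(nn.Module):
    def __init__(self, d_in, d_out):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(d_out, d_in) / d_in ** 0.5)
        self.gain = nn.Parameter(torch.ones(()))   # learnable scale, free of weight decay pressure on direction

    def forward(self, x):
        # Weight decay can shrink `weight` toward its equilibrium norm;
        # `gain` lets the layer recover whatever overall scale the data wants.
        return x @ (self.gain * self.weight).T

layer = ScaledLinear(16, 32)
print(layer(torch.randn(4, 16)).shape)  # torch.Size([4, 32])
```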