How I Study AI - Learn AI Papers & Lectures the Easy Way

Concepts (8)

📚 Theory · Intermediate

Stochastic Depth

Stochastic Depth randomly drops whole residual layers during training while keeping the full network at inference time.

#stochastic depth · #resnet · #residual block · +12
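A minimal NumPy sketch of the idea (function and variable names are illustrative, not from any particular library): during training each residual block survives with probability `p_survive` and is otherwise skipped via the identity shortcut; at inference every block runs, scaled by `p_survive` so expected activations match.

```python
import numpy as np

rng = np.random.default_rng(0)

def residual_block(x, w):
    # Toy residual branch: one linear map stands in for conv/MLP layers.
    return x @ w

def stochastic_depth_forward(x, weights, p_survive=0.8, training=True):
    """Forward pass through a stack of residual blocks with stochastic depth."""
    for w in weights:
        if training:
            if rng.random() < p_survive:
                x = x + residual_block(x, w)   # block survives this step
            # else: block is dropped entirely -> identity shortcut only
        else:
            # Inference: keep every block, scale its residual branch by the
            # survival probability so expectations match training.
            x = x + p_survive * residual_block(x, w)
    return x

d = 4
weights = [rng.normal(scale=0.1, size=(d, d)) for _ in range(6)]
x = np.ones((2, d))
y_train = stochastic_depth_forward(x, weights, training=True)
y_eval = stochastic_depth_forward(x, weights, training=False)
```

Because dropped blocks are skipped entirely, the effective network depth varies from step to step, which also shortens training passes on average.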
📚 Theory · Intermediate

Spectral Regularization

Spectral regularization controls how much a weight matrix can stretch inputs by constraining its largest singular value (spectral norm).

Group: Regularization Theory
#spectral regularization · #spectral norm · #power iteration · +11
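A minimal NumPy sketch of the technique (illustrative names, not a specific library's API): the spectral norm is estimated by power iteration, then the matrix is rescaled so its largest singular value does not exceed a target.

```python
import numpy as np

def spectral_norm(w, n_iters=50, eps=1e-12):
    """Estimate the largest singular value of w by power iteration."""
    rng = np.random.default_rng(0)
    u = rng.normal(size=w.shape[0])
    for _ in range(n_iters):
        v = w.T @ u
        v /= (np.linalg.norm(v) + eps)   # right singular vector estimate
        u = w @ v
        u /= (np.linalg.norm(u) + eps)   # left singular vector estimate
    return float(u @ w @ v)

def spectrally_normalize(w, target=1.0):
    """Rescale w so its spectral norm is at most `target`."""
    sigma = spectral_norm(w)
    return w if sigma <= target else w * (target / sigma)

w = np.diag([3.0, 0.5, 0.1])       # largest singular value is 3.0
w_sn = spectrally_normalize(w)     # rescaled so the spectral norm is 1.0
```

In practice (e.g. spectral normalization for GANs) only one power-iteration step is run per training step, reusing `u` across steps; the many-iteration version here just makes the estimate easy to verify.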
📚 Theory · Intermediate

Early Stopping

Early stopping halts training when the validation loss stops improving, preventing overfitting and saving compute.

#early stopping · #validation loss · #patience · +11
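A minimal sketch of the patience rule (the function name and loss sequence are illustrative): track the best validation loss seen so far and stop once it has failed to improve for `patience` consecutive epochs.

```python
def train_with_early_stopping(val_losses, patience=3):
    """Return (stop_epoch, best_epoch) for a sequence of per-epoch
    validation losses, using a simple patience counter."""
    best_loss = float("inf")
    best_epoch = 0
    epochs_without_improvement = 0
    for epoch, loss in enumerate(val_losses):
        if loss < best_loss:
            best_loss = loss
            best_epoch = epoch
            epochs_without_improvement = 0
        else:
            epochs_without_improvement += 1
            if epochs_without_improvement >= patience:
                # Stop: no improvement for `patience` epochs in a row.
                return epoch, best_epoch
    return len(val_losses) - 1, best_epoch

# Validation loss improves, then drifts up as overfitting sets in.
losses = [1.0, 0.8, 0.7, 0.72, 0.75, 0.74, 0.8]
stop_epoch, best_epoch = train_with_early_stopping(losses, patience=3)
# Training stops at epoch 5; the checkpoint from epoch 2 is restored.
```

Real implementations usually save a model checkpoint whenever `best_loss` improves and restore it at the stop epoch, and often add a `min_delta` threshold so tiny fluctuations don't reset the patience counter.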
📚 Theory · Intermediate

Label Smoothing

Label smoothing replaces a hard one-hot target with a slightly softened distribution to reduce model overconfidence.

#label smoothing · #cross-entropy · #softmax · +12
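A minimal NumPy sketch, assuming the common uniform-mixing formulation: mix the one-hot target with a uniform distribution over all K classes, so the true class gets 1 − ε + ε/K and every other class gets ε/K.

```python
import numpy as np

def smooth_labels(targets, num_classes, eps=0.1):
    """Mix one-hot targets with a uniform distribution over classes."""
    one_hot = np.eye(num_classes)[targets]
    return one_hot * (1.0 - eps) + eps / num_classes

smoothed = smooth_labels(np.array([2]), num_classes=4, eps=0.1)
# smoothed[0] is [0.025, 0.025, 0.925, 0.025]; still a valid distribution.
```

Training with cross-entropy against these soft targets penalizes the model for pushing softmax outputs toward exact 0/1, which is the overconfidence the summary refers to.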
📚 Theory · Intermediate

Data Augmentation Theory

Data augmentation expands the training distribution by applying label-preserving transformations to inputs, which lowers overfitting and improves generalization.

#data augmentation · #vicinal risk minimization · #invariance · +12
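A minimal NumPy sketch of the label-preserving idea (toy transforms on toy "images"; names are illustrative): each augmented copy gets random flips and shifts but keeps the original label, so the training set grows without new annotation.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(image):
    """Apply random label-preserving transforms to one (H, W) image."""
    if rng.random() < 0.5:
        image = image[:, ::-1]             # random horizontal flip
    shift = rng.integers(-2, 3)
    image = np.roll(image, shift, axis=1)  # small horizontal translation
    return image

def augmented_dataset(images, labels, copies=3):
    """Expand the training set; every copy keeps its original label."""
    xs, ys = [], []
    for img, lab in zip(images, labels):
        xs.append(img)
        ys.append(lab)
        for _ in range(copies):
            xs.append(augment(img))
            ys.append(lab)   # label-preserving: the label never changes
    return np.stack(xs), np.array(ys)

images = rng.normal(size=(2, 8, 8))
labels = np.array([0, 1])
X, y = augmented_dataset(images, labels, copies=3)
```

In the vicinal-risk-minimization view, this replaces each training point with a small distribution of nearby points, and the transforms chosen encode the invariances (flip, translation) the model should learn.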
📚 Theory · Intermediate

Layer Normalization

Layer Normalization rescales and recenters each sample across its feature dimensions, making it independent of batch size.

#layer normalization · #gamma beta · #feature normalization · +12
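A minimal NumPy sketch: statistics are computed per sample over the feature axis (not over the batch), then the learnable `gamma`/`beta` rescale and reshift the normalized features.

```python
import numpy as np

def layer_norm(x, gamma, beta, eps=1e-5):
    """Normalize each sample over its feature dimension, then scale/shift.

    Statistics are per-row, so the result does not depend on batch size.
    """
    mean = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta

x = np.array([[ 1.0,  2.0,  3.0,  4.0],
              [10.0, 20.0, 30.0, 40.0]])
gamma = np.ones(4)   # learnable scale, initialized to 1
beta = np.zeros(4)   # learnable shift, initialized to 0
out = layer_norm(x, gamma, beta)
# Each row now has mean ~0 and std ~1, regardless of the other rows.
```

Because nothing crosses the batch axis, the same code works for batch size 1, which is why LayerNorm is the default in Transformers and recurrent models.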
📚 Theory · Intermediate

Batch Normalization

Batch Normalization rescales and recenters activations using mini-batch statistics to stabilize and speed up neural network training.

#batch normalization · #mini-batch statistics · #gamma beta · +11
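A minimal NumPy sketch of the train/eval split (illustrative names): training normalizes with the current mini-batch's mean and variance while updating exponential running averages; inference reuses those running statistics so the output no longer depends on the batch.

```python
import numpy as np

def batch_norm_train(x, gamma, beta, running, momentum=0.9, eps=1e-5):
    """Training step: normalize with mini-batch statistics and update
    the running averages used later at inference time."""
    mean = x.mean(axis=0)   # per-feature statistics across the batch
    var = x.var(axis=0)
    running["mean"] = momentum * running["mean"] + (1 - momentum) * mean
    running["var"] = momentum * running["var"] + (1 - momentum) * var
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta

def batch_norm_eval(x, gamma, beta, running, eps=1e-5):
    """Inference: use running statistics, independent of the current batch."""
    x_hat = (x - running["mean"]) / np.sqrt(running["var"] + eps)
    return gamma * x_hat + beta

rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, scale=2.0, size=(32, 3))   # batch of 32, 3 features
gamma, beta = np.ones(3), np.zeros(3)
running = {"mean": np.zeros(3), "var": np.ones(3)}
out = batch_norm_train(x, gamma, beta, running)
```

The contrast with the Layer Normalization card is the axis: BatchNorm reduces over `axis=0` (the batch), LayerNorm over `axis=-1` (the features), which is exactly why BatchNorm needs running statistics and LayerNorm does not.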
📚 Theory · Intermediate

Dropout

Dropout randomly turns off (zeros) some neurons during training to prevent the network from memorizing the training data.

#dropout · #inverted dropout · #bernoulli mask · +12
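A minimal NumPy sketch of inverted dropout, matching the card's tags: a Bernoulli keep-mask zeroes units with probability `p_drop` and scales the survivors by 1/(1 − p_drop), so inference is a plain pass-through with no rescaling needed.

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(x, p_drop=0.5, training=True):
    """Inverted dropout: zero units with probability p_drop during training
    and rescale survivors by 1/(1 - p_drop); at inference, return x as-is."""
    if not training or p_drop == 0.0:
        return x
    mask = (rng.random(x.shape) >= p_drop).astype(x.dtype)  # Bernoulli keep-mask
    return x * mask / (1.0 - p_drop)

x = np.ones((4, 5))
y_train = dropout(x, p_drop=0.5, training=True)   # entries are 0.0 or 2.0
y_eval = dropout(x, p_drop=0.5, training=False)   # unchanged input
```

The 1/(1 − p_drop) rescaling keeps the expected activation equal to `x`, which is what lets the inference path skip any correction.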