🎓 How I Study AI
How I Study AI - Learn AI Papers & Lectures the Easy Way

Concepts: Regularization Theory (11)
📚 Theory · Intermediate

Stochastic Depth

Stochastic Depth randomly drops whole residual layers during training while keeping the full network at inference time.

#stochastic depth · #resnet · #residual block · +12 more
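A minimal NumPy sketch of the idea (my illustration, not the site's code): during training each residual branch is kept with probability `survival_prob` and skipped otherwise; at inference every branch runs, scaled by `survival_prob` so the expected activation matches training.

```python
import numpy as np

def residual_block(x, w):
    # toy residual branch: linear map + ReLU
    return np.maximum(0.0, x @ w)

def stochastic_depth_forward(x, weights, survival_prob=0.8, train=True, rng=None):
    """Stack of residual blocks with stochastic depth."""
    rng = rng or np.random.default_rng(0)
    for w in weights:
        if train:
            if rng.random() < survival_prob:   # keep this layer
                x = x + residual_block(x, w)
            # else: identity shortcut only (whole layer dropped)
        else:
            # full network at test time, branch scaled by survival_prob
            x = x + survival_prob * residual_block(x, w)
    return x

x = np.ones((2, 4))
weights = [np.eye(4) * 0.1 for _ in range(3)]
out_train = stochastic_depth_forward(x, weights, train=True)
out_eval = stochastic_depth_forward(x, weights, train=False)
```

With these toy weights each inference layer multiplies the input by 1.08, so `out_eval` is exactly `1.08**3` everywhere, while the training output varies with which layers survive.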
📚 Theory · Intermediate

Spectral Regularization

Spectral regularization controls how much a weight matrix can stretch inputs by constraining its largest singular value (spectral norm).

#spectral regularization · #spectral norm · #power iteration · +11 more
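The spectral norm is usually estimated with power iteration rather than a full SVD. A minimal NumPy sketch (my illustration, with a hypothetical `spectral_norm` helper):

```python
import numpy as np

def spectral_norm(W, n_iters=50, rng=None):
    """Estimate the largest singular value of W by power iteration."""
    rng = rng or np.random.default_rng(0)
    u = rng.standard_normal(W.shape[0])
    for _ in range(n_iters):
        v = W.T @ u
        v /= np.linalg.norm(v)
        u = W @ v
        u /= np.linalg.norm(u)
    return float(u @ W @ v)   # Rayleigh-quotient estimate of sigma_max

W = np.diag([3.0, 1.0, 0.5])   # singular values 3, 1, 0.5
sigma = spectral_norm(W)
W_sn = W / max(sigma, 1.0)     # spectral normalization: clamp sigma_max to <= 1
```

Dividing the weight matrix by its estimated spectral norm caps how much it can stretch any input, which is the constraint the card describes.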
📚 Theory · Intermediate

Early Stopping

Early stopping halts training when the validation loss stops improving, preventing overfitting and saving compute.

#early stopping · #validation loss · #patience · +11 more
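The patience rule can be sketched in a few lines of plain Python (my illustration, not the site's code): stop once the validation loss has failed to improve for `patience` consecutive epochs, and keep the best epoch seen.

```python
def train_with_early_stopping(val_losses, patience=3):
    """Return (best_epoch, best_loss), halting after `patience` bad epochs."""
    best_loss = float("inf")
    best_epoch = 0
    bad_epochs = 0
    for epoch, loss in enumerate(val_losses):
        if loss < best_loss:
            best_loss, best_epoch, bad_epochs = loss, epoch, 0
        else:
            bad_epochs += 1          # no improvement this epoch
            if bad_epochs >= patience:
                break                # stop training, restore best checkpoint
    return best_epoch, best_loss

# validation loss improves, then plateaus and worsens
losses = [1.0, 0.8, 0.7, 0.72, 0.71, 0.73, 0.74]
best_epoch, best_loss = train_with_early_stopping(losses, patience=3)
```

Here the minimum (0.7) occurs at epoch 2, and training halts three non-improving epochs later instead of running to the end.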
📚 Theory · Intermediate

Label Smoothing

Label smoothing replaces a hard one-hot target with a slightly softened distribution to reduce model overconfidence.

#label smoothing · #cross-entropy · #softmax · +12 more
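The softened target is easy to construct directly (a minimal NumPy sketch, my illustration): the true class gets probability 1 − ε and the remaining ε is spread uniformly over all classes.

```python
import numpy as np

def smooth_labels(targets, num_classes, eps=0.1):
    """Turn integer class labels into smoothed one-hot target distributions."""
    one_hot = np.eye(num_classes)[targets]
    # true class: 1 - eps + eps/K; every other class: eps/K
    return one_hot * (1.0 - eps) + eps / num_classes

y = smooth_labels(np.array([0, 2]), num_classes=4, eps=0.1)
```

Each row still sums to 1, but the peak is 0.925 instead of 1.0, so the cross-entropy loss never pushes the softmax toward infinite logits.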
📚 Theory · Intermediate

Data Augmentation Theory

Data augmentation expands the training distribution by applying label-preserving transformations to inputs, which lowers overfitting and improves generalization.

#data augmentation · #vicinal risk minimization · #invariance · +12 more
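A minimal NumPy sketch of label-preserving transformations (my illustration, assuming horizontal-flip invariance holds for the task): each call samples a new point from a vicinity of the original input while the label stays fixed by construction.

```python
import numpy as np

def augment(image, rng):
    """Randomly apply label-preserving transforms: horizontal flip + mild noise."""
    if rng.random() < 0.5:
        image = image[:, ::-1]                        # horizontal flip
    return image + rng.normal(0.0, 0.01, image.shape)  # small jitter

rng = np.random.default_rng(0)
img = np.arange(9, dtype=float).reshape(3, 3)
batch = [augment(img, rng) for _ in range(4)]  # 4 distinct views, same label
```

Training on such sampled neighborhoods instead of single points is the vicinal risk minimization view the tags refer to.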
📚 Theory · Intermediate

Layer Normalization

Layer Normalization rescales and recenters each sample across its feature dimensions, making it independent of batch size.

#layer normalization · #gamma beta · #feature normalization · +12 more
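A minimal NumPy sketch (my illustration): statistics are computed per sample over the feature axis, so the result is identical whatever the batch size.

```python
import numpy as np

def layer_norm(x, gamma, beta, eps=1e-5):
    """Normalize each sample over its features, then scale/shift with gamma, beta."""
    mean = x.mean(axis=-1, keepdims=True)   # per-sample mean
    var = x.var(axis=-1, keepdims=True)     # per-sample variance
    return gamma * (x - mean) / np.sqrt(var + eps) + beta

x = np.array([[1.0, 2.0, 3.0],
              [10.0, 20.0, 30.0]])
y = layer_norm(x, gamma=np.ones(3), beta=np.zeros(3))
```

Both rows normalize to the same values even though their scales differ by 10x, illustrating the batch independence the card mentions.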
📚 Theory · Intermediate

Batch Normalization

Batch Normalization rescales and recenters activations using mini-batch statistics to stabilize and speed up neural network training.

#batch normalization · #mini-batch statistics · #gamma beta · +11 more
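The training-time forward pass is a short computation (a minimal NumPy sketch, my illustration; a real layer also tracks running statistics for inference):

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """Normalize each feature over the mini-batch, then scale/shift."""
    mean = x.mean(axis=0, keepdims=True)   # per-feature mean over the batch
    var = x.var(axis=0, keepdims=True)     # per-feature variance over the batch
    return gamma * (x - mean) / np.sqrt(var + eps) + beta

x = np.array([[1.0, 100.0],
              [3.0, 300.0]])
y = batch_norm(x, gamma=np.ones(2), beta=np.zeros(2))
```

Note the axis difference from layer normalization: here statistics run over the batch dimension (axis 0), so features on wildly different scales are brought to zero mean and unit variance.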
📚 Theory · Intermediate

Dropout

Dropout randomly turns off (zeros) some neurons during training, preventing co-adaptation between units and reducing overfitting to the training data.

#dropout · #inverted dropout · #bernoulli mask · +12 more
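The inverted-dropout variant the tags mention can be sketched in NumPy (my illustration): survivors are scaled by 1/(1 − p) at training time so the expected activation is unchanged and inference needs no adjustment.

```python
import numpy as np

def inverted_dropout(x, p=0.5, train=True, rng=None):
    """Zero each activation with probability p; rescale survivors by 1/(1-p)."""
    if not train:
        return x                           # inference: identity
    rng = rng or np.random.default_rng(0)
    mask = rng.random(x.shape) >= p        # Bernoulli keep-mask
    return x * mask / (1.0 - p)

x = np.ones((2, 5))
y_train = inverted_dropout(x, p=0.5, train=True)
y_eval = inverted_dropout(x, p=0.5, train=False)
```

With p = 0.5 every surviving unit is doubled to 2.0 and the rest are zeroed, so each entry of `y_train` is either 0 or 2 while `y_eval` passes through untouched.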
∑ Math · Intermediate

Elastic Net Regularization

Elastic Net regularization combines L1 (Lasso) and L2 (Ridge) penalties to produce models that are both sparse and stable.

#elastic net · #lasso · #ridge regression · +12 more
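A minimal NumPy sketch of the combined penalty (my illustration; the mixing parameterization follows the common `alpha`-blend convention, which is an assumption here):

```python
import numpy as np

def elastic_net_penalty(w, lam=1.0, alpha=0.5):
    """Elastic net: alpha weights the L1 term, (1 - alpha) the (halved) L2 term.

    alpha=1 recovers Lasso, alpha=0 recovers Ridge.
    """
    l1 = np.sum(np.abs(w))
    l2 = np.sum(w ** 2)
    return lam * (alpha * l1 + (1.0 - alpha) * 0.5 * l2)

w = np.array([1.0, -2.0, 0.0])
pen = elastic_net_penalty(w, lam=1.0, alpha=0.5)
```

Sliding `alpha` between 0 and 1 trades off the sparsity of L1 against the grouping stability of L2, which is the "sparse and stable" combination the card describes.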
∑ Math · Intermediate

L2 Regularization (Ridge/Weight Decay)

L2 regularization (also called ridge or weight decay) adds a penalty proportional to the sum of squared weights to discourage large parameters.

#l2 regularization · #ridge regression · #weight decay · +12 more
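The "weight decay" name comes from what the penalty does to a gradient step: the term (λ/2)‖w‖² contributes λw to the gradient, so each SGD update shrinks the weights multiplicatively. A minimal NumPy sketch (my illustration):

```python
import numpy as np

def sgd_step_l2(w, grad, lr=0.1, lam=0.01):
    """One SGD step with an L2 penalty: the lam * w term decays the weights."""
    return w - lr * (grad + lam * w)

w = np.array([1.0, -1.0])
# with a zero data gradient, the step is pure decay: w -> (1 - lr*lam) * w
w_new = sgd_step_l2(w, grad=np.zeros(2), lr=0.1, lam=0.01)
```

With a zero data gradient the update reduces to multiplying the weights by 1 − lr·λ = 0.999, making the decay interpretation explicit.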
∑ Math · Intermediate

L1 Regularization (Lasso)

L1 regularization (Lasso) adds a penalty \(\lambda \sum_{i=1}^{p} |w_i|\) to the loss, which pushes many coefficients exactly to zero and performs feature selection.

#lasso · #l1 regularization · #soft-thresholding · +12 more
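The soft-thresholding operator in the tags is the proximal map of the L1 penalty, and it shows exactly how coefficients reach zero. A minimal NumPy sketch (my illustration):

```python
import numpy as np

def soft_threshold(w, t):
    """Prox of t * ||w||_1: shrink each coordinate toward zero by t;
    coordinates with |w_i| <= t become exactly zero."""
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

w = np.array([3.0, -0.5, 1.2])
w_sparse = soft_threshold(w, t=1.0)   # middle coordinate is zeroed exactly
```

Unlike L2, which only scales weights down, this operator sets small coordinates to exactly zero, which is why Lasso performs feature selection.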